Test Report: KVM_Linux_crio 19461

ee4f5fb2e73abafca70b3598ab7977372efc25a8:2024-08-16:35814

Failed tests (29/318)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 151.16
36 TestAddons/parallel/MetricsServer 319.49
45 TestAddons/StoppedEnableDisable 154.27
164 TestMultiControlPlane/serial/StopSecondaryNode 141.81
166 TestMultiControlPlane/serial/RestartSecondaryNode 59.04
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 398.07
171 TestMultiControlPlane/serial/StopCluster 141.64
231 TestMultiNode/serial/RestartKeepsNodes 331.3
233 TestMultiNode/serial/StopMultiNode 141.31
240 TestPreload 274.38
248 TestKubernetesUpgrade 457.93
319 TestStartStop/group/old-k8s-version/serial/FirstStart 287.2
345 TestStartStop/group/embed-certs/serial/Stop 138.96
348 TestStartStop/group/no-preload/serial/Stop 138.97
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.08
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.39
353 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
354 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 112.6
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
362 TestStartStop/group/old-k8s-version/serial/SecondStart 715.54
363 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.23
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.67
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.62
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.48
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 399.9
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 423.28
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 367.85
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 141.72
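Several of these durations cluster near 140 s or 540 s, which suggests wait-loop timeouts rather than immediate errors. To triage one entry in isolation, the failing test can be re-run locally with a `go test -run` filter; the sketch below is a hedged helper, not part of this report's tooling, and the `./test/integration` package path, the 30-minute timeout, and the chosen test name are assumptions about a local minikube checkout.

```go
// Hedged sketch: re-run a single failing test from the table above in isolation.
// The package path, timeout, and test name are assumptions about a local
// minikube checkout; they are not values recorded in this report.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("go", "test", "./test/integration",
		"-run", "TestAddons/parallel/Ingress", // substitute any name from the table
		"-timeout", "30m", "-v")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1) // propagate the failure, mirroring the CI result
	}
}
```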
TestAddons/parallel/Ingress (151.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-671083 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-671083 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-671083 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d0bf1f79-56d5-4c95-8a88-8e8d0007a72a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d0bf1f79-56d5-4c95-8a88-8e8d0007a72a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004035818s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-671083 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.718355746s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-671083 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.240
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-671083 addons disable ingress --alsologtostderr -v=1: (7.666247377s)
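The curl probe above is the step that actually failed: ssh reported exit status 28, which matches curl's "operation timed out" code, so nothing answered on 127.0.0.1 with the nginx.example.com Host header inside the 2-minute window. For manual triage outside the harness, a small poller along the lines of the sketch below approximates that check; it is not the test's own code, and the target URL (built from the VM IP reported elsewhere in this log), the host name, and the retry interval are assumptions.

```go
// Hedged sketch: poll an ingress endpoint with a spoofed Host header until it
// returns 200 or a deadline passes -- roughly what the failing curl step checks.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForIngress(url, host string, timeout time.Duration) error {
	client := &http.Client{Timeout: 10 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		req.Host = host // equivalent of curl -H 'Host: nginx.example.com'
		if resp, err := client.Do(req); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("no 200 from %s with Host %s within %s", url, host, timeout)
}

func main() {
	// 192.168.39.240 is the VM IP shown later in this log; inside the test the
	// request runs over ssh against 127.0.0.1 on the node itself.
	if err := waitForIngress("http://192.168.39.240/", "nginx.example.com", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```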
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-671083 -n addons-671083
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-671083 logs -n 25: (1.126954837s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-651132                                                                     | download-only-651132 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:48 UTC |
	| delete  | -p download-only-696494                                                                     | download-only-696494 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-250559 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | binary-mirror-250559                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41735                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-250559                                                                     | binary-mirror-250559 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-671083 --wait=true                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-671083 ssh cat                                                                       | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | /opt/local-path-provisioner/pvc-38437f91-cec1-425d-a656-8ecfa2176521_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-671083 ip                                                                            | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | -p addons-671083                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | -p addons-671083                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:52 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC | 16 Aug 24 16:52 UTC |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC | 16 Aug 24 16:52 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-671083 ssh curl -s                                                                   | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-671083 addons                                                                        | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC | 16 Aug 24 16:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-671083 addons                                                                        | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:53 UTC | 16 Aug 24 16:53 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-671083 ip                                                                            | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:54 UTC | 16 Aug 24 16:54 UTC |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:54 UTC | 16 Aug 24 16:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:54 UTC | 16 Aug 24 16:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 16:48:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 16:48:58.066320   17475 out.go:345] Setting OutFile to fd 1 ...
	I0816 16:48:58.066549   17475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:58.066557   17475 out.go:358] Setting ErrFile to fd 2...
	I0816 16:48:58.066561   17475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:58.066729   17475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 16:48:58.067959   17475 out.go:352] Setting JSON to false
	I0816 16:48:58.068791   17475 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1836,"bootTime":1723825102,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 16:48:58.068852   17475 start.go:139] virtualization: kvm guest
	I0816 16:48:58.070481   17475 out.go:177] * [addons-671083] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 16:48:58.071896   17475 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 16:48:58.071898   17475 notify.go:220] Checking for updates...
	I0816 16:48:58.073106   17475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 16:48:58.074323   17475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 16:48:58.075526   17475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:58.076862   17475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 16:48:58.077959   17475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 16:48:58.079112   17475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 16:48:58.109792   17475 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 16:48:58.110961   17475 start.go:297] selected driver: kvm2
	I0816 16:48:58.111007   17475 start.go:901] validating driver "kvm2" against <nil>
	I0816 16:48:58.111026   17475 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 16:48:58.111718   17475 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:58.111787   17475 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 16:48:58.126153   17475 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 16:48:58.126195   17475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 16:48:58.126471   17475 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 16:48:58.126549   17475 cni.go:84] Creating CNI manager for ""
	I0816 16:48:58.126566   17475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:48:58.126577   17475 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 16:48:58.126635   17475 start.go:340] cluster config:
	{Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 16:48:58.126747   17475 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:58.128601   17475 out.go:177] * Starting "addons-671083" primary control-plane node in "addons-671083" cluster
	I0816 16:48:58.129930   17475 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 16:48:58.129964   17475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 16:48:58.129980   17475 cache.go:56] Caching tarball of preloaded images
	I0816 16:48:58.130060   17475 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 16:48:58.130073   17475 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 16:48:58.130545   17475 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/config.json ...
	I0816 16:48:58.130586   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/config.json: {Name:mkd709046bf2fd424ed782edfe71f24ef626b9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:48:58.130790   17475 start.go:360] acquireMachinesLock for addons-671083: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 16:48:58.130856   17475 start.go:364] duration metric: took 47.276µs to acquireMachinesLock for "addons-671083"
	I0816 16:48:58.130880   17475 start.go:93] Provisioning new machine with config: &{Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 16:48:58.130953   17475 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 16:48:58.132462   17475 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0816 16:48:58.132605   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:48:58.132665   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:48:58.146329   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0816 16:48:58.146784   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:48:58.147300   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:48:58.147317   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:48:58.147728   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:48:58.147964   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:48:58.148123   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:48:58.148272   17475 start.go:159] libmachine.API.Create for "addons-671083" (driver="kvm2")
	I0816 16:48:58.148308   17475 client.go:168] LocalClient.Create starting
	I0816 16:48:58.148348   17475 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 16:48:58.212191   17475 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 16:48:58.364167   17475 main.go:141] libmachine: Running pre-create checks...
	I0816 16:48:58.364191   17475 main.go:141] libmachine: (addons-671083) Calling .PreCreateCheck
	I0816 16:48:58.364721   17475 main.go:141] libmachine: (addons-671083) Calling .GetConfigRaw
	I0816 16:48:58.365138   17475 main.go:141] libmachine: Creating machine...
	I0816 16:48:58.365153   17475 main.go:141] libmachine: (addons-671083) Calling .Create
	I0816 16:48:58.365323   17475 main.go:141] libmachine: (addons-671083) Creating KVM machine...
	I0816 16:48:58.366609   17475 main.go:141] libmachine: (addons-671083) DBG | found existing default KVM network
	I0816 16:48:58.367258   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.367118   17497 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0816 16:48:58.367271   17475 main.go:141] libmachine: (addons-671083) DBG | created network xml: 
	I0816 16:48:58.367280   17475 main.go:141] libmachine: (addons-671083) DBG | <network>
	I0816 16:48:58.367286   17475 main.go:141] libmachine: (addons-671083) DBG |   <name>mk-addons-671083</name>
	I0816 16:48:58.367292   17475 main.go:141] libmachine: (addons-671083) DBG |   <dns enable='no'/>
	I0816 16:48:58.367296   17475 main.go:141] libmachine: (addons-671083) DBG |   
	I0816 16:48:58.367308   17475 main.go:141] libmachine: (addons-671083) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 16:48:58.367316   17475 main.go:141] libmachine: (addons-671083) DBG |     <dhcp>
	I0816 16:48:58.367326   17475 main.go:141] libmachine: (addons-671083) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 16:48:58.367335   17475 main.go:141] libmachine: (addons-671083) DBG |     </dhcp>
	I0816 16:48:58.367344   17475 main.go:141] libmachine: (addons-671083) DBG |   </ip>
	I0816 16:48:58.367353   17475 main.go:141] libmachine: (addons-671083) DBG |   
	I0816 16:48:58.367360   17475 main.go:141] libmachine: (addons-671083) DBG | </network>
	I0816 16:48:58.367374   17475 main.go:141] libmachine: (addons-671083) DBG | 
	I0816 16:48:58.372585   17475 main.go:141] libmachine: (addons-671083) DBG | trying to create private KVM network mk-addons-671083 192.168.39.0/24...
	I0816 16:48:58.437354   17475 main.go:141] libmachine: (addons-671083) DBG | private KVM network mk-addons-671083 192.168.39.0/24 created
	I0816 16:48:58.437389   17475 main.go:141] libmachine: (addons-671083) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083 ...
	I0816 16:48:58.437404   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.437305   17497 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:58.437437   17475 main.go:141] libmachine: (addons-671083) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 16:48:58.437459   17475 main.go:141] libmachine: (addons-671083) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 16:48:58.687608   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.687450   17497 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa...
	I0816 16:48:58.861012   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.860911   17497 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/addons-671083.rawdisk...
	I0816 16:48:58.861033   17475 main.go:141] libmachine: (addons-671083) DBG | Writing magic tar header
	I0816 16:48:58.861043   17475 main.go:141] libmachine: (addons-671083) DBG | Writing SSH key tar header
	I0816 16:48:58.861101   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.861041   17497 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083 ...
	I0816 16:48:58.861256   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083 (perms=drwx------)
	I0816 16:48:58.861301   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 16:48:58.861320   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083
	I0816 16:48:58.861332   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 16:48:58.861343   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:58.861351   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 16:48:58.861358   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 16:48:58.861380   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 16:48:58.861412   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 16:48:58.861426   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins
	I0816 16:48:58.861438   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home
	I0816 16:48:58.861448   17475 main.go:141] libmachine: (addons-671083) DBG | Skipping /home - not owner
	I0816 16:48:58.861465   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 16:48:58.861477   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 16:48:58.861502   17475 main.go:141] libmachine: (addons-671083) Creating domain...
	I0816 16:48:58.862374   17475 main.go:141] libmachine: (addons-671083) define libvirt domain using xml: 
	I0816 16:48:58.862398   17475 main.go:141] libmachine: (addons-671083) <domain type='kvm'>
	I0816 16:48:58.862409   17475 main.go:141] libmachine: (addons-671083)   <name>addons-671083</name>
	I0816 16:48:58.862416   17475 main.go:141] libmachine: (addons-671083)   <memory unit='MiB'>4000</memory>
	I0816 16:48:58.862436   17475 main.go:141] libmachine: (addons-671083)   <vcpu>2</vcpu>
	I0816 16:48:58.862454   17475 main.go:141] libmachine: (addons-671083)   <features>
	I0816 16:48:58.862462   17475 main.go:141] libmachine: (addons-671083)     <acpi/>
	I0816 16:48:58.862466   17475 main.go:141] libmachine: (addons-671083)     <apic/>
	I0816 16:48:58.862472   17475 main.go:141] libmachine: (addons-671083)     <pae/>
	I0816 16:48:58.862479   17475 main.go:141] libmachine: (addons-671083)     
	I0816 16:48:58.862484   17475 main.go:141] libmachine: (addons-671083)   </features>
	I0816 16:48:58.862491   17475 main.go:141] libmachine: (addons-671083)   <cpu mode='host-passthrough'>
	I0816 16:48:58.862495   17475 main.go:141] libmachine: (addons-671083)   
	I0816 16:48:58.862503   17475 main.go:141] libmachine: (addons-671083)   </cpu>
	I0816 16:48:58.862509   17475 main.go:141] libmachine: (addons-671083)   <os>
	I0816 16:48:58.862515   17475 main.go:141] libmachine: (addons-671083)     <type>hvm</type>
	I0816 16:48:58.862520   17475 main.go:141] libmachine: (addons-671083)     <boot dev='cdrom'/>
	I0816 16:48:58.862525   17475 main.go:141] libmachine: (addons-671083)     <boot dev='hd'/>
	I0816 16:48:58.862553   17475 main.go:141] libmachine: (addons-671083)     <bootmenu enable='no'/>
	I0816 16:48:58.862573   17475 main.go:141] libmachine: (addons-671083)   </os>
	I0816 16:48:58.862585   17475 main.go:141] libmachine: (addons-671083)   <devices>
	I0816 16:48:58.862597   17475 main.go:141] libmachine: (addons-671083)     <disk type='file' device='cdrom'>
	I0816 16:48:58.862612   17475 main.go:141] libmachine: (addons-671083)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/boot2docker.iso'/>
	I0816 16:48:58.862624   17475 main.go:141] libmachine: (addons-671083)       <target dev='hdc' bus='scsi'/>
	I0816 16:48:58.862636   17475 main.go:141] libmachine: (addons-671083)       <readonly/>
	I0816 16:48:58.862650   17475 main.go:141] libmachine: (addons-671083)     </disk>
	I0816 16:48:58.862663   17475 main.go:141] libmachine: (addons-671083)     <disk type='file' device='disk'>
	I0816 16:48:58.862676   17475 main.go:141] libmachine: (addons-671083)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 16:48:58.862692   17475 main.go:141] libmachine: (addons-671083)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/addons-671083.rawdisk'/>
	I0816 16:48:58.862712   17475 main.go:141] libmachine: (addons-671083)       <target dev='hda' bus='virtio'/>
	I0816 16:48:58.862732   17475 main.go:141] libmachine: (addons-671083)     </disk>
	I0816 16:48:58.862750   17475 main.go:141] libmachine: (addons-671083)     <interface type='network'>
	I0816 16:48:58.862759   17475 main.go:141] libmachine: (addons-671083)       <source network='mk-addons-671083'/>
	I0816 16:48:58.862764   17475 main.go:141] libmachine: (addons-671083)       <model type='virtio'/>
	I0816 16:48:58.862770   17475 main.go:141] libmachine: (addons-671083)     </interface>
	I0816 16:48:58.862777   17475 main.go:141] libmachine: (addons-671083)     <interface type='network'>
	I0816 16:48:58.862783   17475 main.go:141] libmachine: (addons-671083)       <source network='default'/>
	I0816 16:48:58.862790   17475 main.go:141] libmachine: (addons-671083)       <model type='virtio'/>
	I0816 16:48:58.862795   17475 main.go:141] libmachine: (addons-671083)     </interface>
	I0816 16:48:58.862802   17475 main.go:141] libmachine: (addons-671083)     <serial type='pty'>
	I0816 16:48:58.862808   17475 main.go:141] libmachine: (addons-671083)       <target port='0'/>
	I0816 16:48:58.862812   17475 main.go:141] libmachine: (addons-671083)     </serial>
	I0816 16:48:58.862825   17475 main.go:141] libmachine: (addons-671083)     <console type='pty'>
	I0816 16:48:58.862837   17475 main.go:141] libmachine: (addons-671083)       <target type='serial' port='0'/>
	I0816 16:48:58.862845   17475 main.go:141] libmachine: (addons-671083)     </console>
	I0816 16:48:58.862849   17475 main.go:141] libmachine: (addons-671083)     <rng model='virtio'>
	I0816 16:48:58.862856   17475 main.go:141] libmachine: (addons-671083)       <backend model='random'>/dev/random</backend>
	I0816 16:48:58.862863   17475 main.go:141] libmachine: (addons-671083)     </rng>
	I0816 16:48:58.862868   17475 main.go:141] libmachine: (addons-671083)     
	I0816 16:48:58.862873   17475 main.go:141] libmachine: (addons-671083)     
	I0816 16:48:58.862879   17475 main.go:141] libmachine: (addons-671083)   </devices>
	I0816 16:48:58.862883   17475 main.go:141] libmachine: (addons-671083) </domain>
	I0816 16:48:58.862890   17475 main.go:141] libmachine: (addons-671083) 
	I0816 16:48:58.869540   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:96:88:f3 in network default
	I0816 16:48:58.870032   17475 main.go:141] libmachine: (addons-671083) Ensuring networks are active...
	I0816 16:48:58.870060   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:48:58.870572   17475 main.go:141] libmachine: (addons-671083) Ensuring network default is active
	I0816 16:48:58.870899   17475 main.go:141] libmachine: (addons-671083) Ensuring network mk-addons-671083 is active
	I0816 16:48:58.871971   17475 main.go:141] libmachine: (addons-671083) Getting domain xml...
	I0816 16:48:58.872549   17475 main.go:141] libmachine: (addons-671083) Creating domain...
	I0816 16:49:00.249291   17475 main.go:141] libmachine: (addons-671083) Waiting to get IP...
	I0816 16:49:00.250017   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:00.250334   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:00.250391   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:00.250329   17497 retry.go:31] will retry after 283.890348ms: waiting for machine to come up
	I0816 16:49:00.535939   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:00.536338   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:00.536365   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:00.536278   17497 retry.go:31] will retry after 272.589716ms: waiting for machine to come up
	I0816 16:49:00.810717   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:00.811053   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:00.811076   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:00.811017   17497 retry.go:31] will retry after 327.359128ms: waiting for machine to come up
	I0816 16:49:01.139598   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:01.140077   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:01.140105   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:01.139964   17497 retry.go:31] will retry after 531.723403ms: waiting for machine to come up
	I0816 16:49:01.673755   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:01.674244   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:01.674275   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:01.674193   17497 retry.go:31] will retry after 675.414072ms: waiting for machine to come up
	I0816 16:49:02.351169   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:02.351653   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:02.351681   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:02.351600   17497 retry.go:31] will retry after 640.251541ms: waiting for machine to come up
	I0816 16:49:02.993371   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:02.993740   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:02.993763   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:02.993706   17497 retry.go:31] will retry after 1.168312298s: waiting for machine to come up
	I0816 16:49:04.163701   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:04.164021   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:04.164044   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:04.163972   17497 retry.go:31] will retry after 1.340581367s: waiting for machine to come up
	I0816 16:49:05.505783   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:05.506209   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:05.506238   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:05.506128   17497 retry.go:31] will retry after 1.298392326s: waiting for machine to come up
	I0816 16:49:06.806595   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:06.806996   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:06.807031   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:06.806964   17497 retry.go:31] will retry after 2.080408667s: waiting for machine to come up
	I0816 16:49:08.889159   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:08.889759   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:08.889781   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:08.889712   17497 retry.go:31] will retry after 2.264587812s: waiting for machine to come up
	I0816 16:49:11.156974   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:11.157347   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:11.157376   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:11.157323   17497 retry.go:31] will retry after 2.310982395s: waiting for machine to come up
	I0816 16:49:13.470389   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:13.470775   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:13.470793   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:13.470750   17497 retry.go:31] will retry after 3.3460659s: waiting for machine to come up
	I0816 16:49:16.821167   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:16.821588   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:16.821611   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:16.821544   17497 retry.go:31] will retry after 3.950147872s: waiting for machine to come up
	I0816 16:49:20.775320   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.775789   17475 main.go:141] libmachine: (addons-671083) Found IP for machine: 192.168.39.240
	I0816 16:49:20.775803   17475 main.go:141] libmachine: (addons-671083) Reserving static IP address...
	I0816 16:49:20.775812   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has current primary IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.776271   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find host DHCP lease matching {name: "addons-671083", mac: "52:54:00:4b:34:d9", ip: "192.168.39.240"} in network mk-addons-671083
	I0816 16:49:20.845508   17475 main.go:141] libmachine: (addons-671083) Reserved static IP address: 192.168.39.240
	I0816 16:49:20.845534   17475 main.go:141] libmachine: (addons-671083) Waiting for SSH to be available...
	I0816 16:49:20.845543   17475 main.go:141] libmachine: (addons-671083) DBG | Getting to WaitForSSH function...
	I0816 16:49:20.847610   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.848015   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:20.848049   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.848229   17475 main.go:141] libmachine: (addons-671083) DBG | Using SSH client type: external
	I0816 16:49:20.848263   17475 main.go:141] libmachine: (addons-671083) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa (-rw-------)
	I0816 16:49:20.848309   17475 main.go:141] libmachine: (addons-671083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 16:49:20.848323   17475 main.go:141] libmachine: (addons-671083) DBG | About to run SSH command:
	I0816 16:49:20.848335   17475 main.go:141] libmachine: (addons-671083) DBG | exit 0
	I0816 16:49:20.976967   17475 main.go:141] libmachine: (addons-671083) DBG | SSH cmd err, output: <nil>: 
	I0816 16:49:20.977279   17475 main.go:141] libmachine: (addons-671083) KVM machine creation complete!
	I0816 16:49:20.977674   17475 main.go:141] libmachine: (addons-671083) Calling .GetConfigRaw
	I0816 16:49:20.978151   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:20.978313   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:20.978474   17475 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 16:49:20.978486   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:20.979759   17475 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 16:49:20.979774   17475 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 16:49:20.979780   17475 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 16:49:20.979786   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:20.982049   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.982411   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:20.982438   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.982558   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:20.982724   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:20.982912   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:20.983045   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:20.983215   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:20.983379   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:20.983389   17475 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 16:49:21.079814   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 16:49:21.079834   17475 main.go:141] libmachine: Detecting the provisioner...
	I0816 16:49:21.079842   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.082532   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.082912   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.082936   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.083024   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.083232   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.083380   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.083507   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.083770   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.083958   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.083970   17475 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 16:49:21.180964   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 16:49:21.181038   17475 main.go:141] libmachine: found compatible host: buildroot
	I0816 16:49:21.181048   17475 main.go:141] libmachine: Provisioning with buildroot...
	I0816 16:49:21.181055   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:49:21.181426   17475 buildroot.go:166] provisioning hostname "addons-671083"
	I0816 16:49:21.181451   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:49:21.181629   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.184121   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.184541   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.184581   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.184760   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.184933   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.185085   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.185225   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.185430   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.185624   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.185641   17475 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-671083 && echo "addons-671083" | sudo tee /etc/hostname
	I0816 16:49:21.299478   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-671083
	
	I0816 16:49:21.299509   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.302474   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.302806   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.302833   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.302986   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.303177   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.303385   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.303544   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.303704   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.303929   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.303948   17475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-671083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-671083/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-671083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 16:49:21.408027   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 16:49:21.408053   17475 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 16:49:21.408078   17475 buildroot.go:174] setting up certificates
	I0816 16:49:21.408093   17475 provision.go:84] configureAuth start
	I0816 16:49:21.408103   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:49:21.408401   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:21.410788   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.411067   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.411100   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.411293   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.413459   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.413787   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.413811   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.413887   17475 provision.go:143] copyHostCerts
	I0816 16:49:21.413976   17475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 16:49:21.414114   17475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 16:49:21.414227   17475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 16:49:21.414310   17475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.addons-671083 san=[127.0.0.1 192.168.39.240 addons-671083 localhost minikube]
	I0816 16:49:21.726952   17475 provision.go:177] copyRemoteCerts
	I0816 16:49:21.727010   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 16:49:21.727032   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.729698   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.730018   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.730046   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.730227   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.730418   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.730638   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.730778   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:21.806159   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 16:49:21.827190   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 16:49:21.848400   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 16:49:21.868815   17475 provision.go:87] duration metric: took 460.707117ms to configureAuth
	I0816 16:49:21.868848   17475 buildroot.go:189] setting minikube options for container-runtime
	I0816 16:49:21.869048   17475 config.go:182] Loaded profile config "addons-671083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 16:49:21.869140   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.871548   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.871868   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.871896   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.872043   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.872239   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.872408   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.872527   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.872696   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.872847   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.872860   17475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 16:49:22.134070   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 16:49:22.134095   17475 main.go:141] libmachine: Checking connection to Docker...
	I0816 16:49:22.134102   17475 main.go:141] libmachine: (addons-671083) Calling .GetURL
	I0816 16:49:22.135572   17475 main.go:141] libmachine: (addons-671083) DBG | Using libvirt version 6000000
	I0816 16:49:22.137843   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.138190   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.138221   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.138371   17475 main.go:141] libmachine: Docker is up and running!
	I0816 16:49:22.138386   17475 main.go:141] libmachine: Reticulating splines...
	I0816 16:49:22.138393   17475 client.go:171] duration metric: took 23.990076596s to LocalClient.Create
	I0816 16:49:22.138413   17475 start.go:167] duration metric: took 23.990143896s to libmachine.API.Create "addons-671083"
	I0816 16:49:22.138422   17475 start.go:293] postStartSetup for "addons-671083" (driver="kvm2")
	I0816 16:49:22.138430   17475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 16:49:22.138446   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.138662   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 16:49:22.138684   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.140585   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.140926   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.140952   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.141067   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.141217   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.141360   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.141514   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:22.220583   17475 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 16:49:22.224660   17475 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 16:49:22.224679   17475 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 16:49:22.224767   17475 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 16:49:22.224801   17475 start.go:296] duration metric: took 86.372451ms for postStartSetup
	I0816 16:49:22.224841   17475 main.go:141] libmachine: (addons-671083) Calling .GetConfigRaw
	I0816 16:49:22.225400   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:22.228015   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.228329   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.228356   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.228607   17475 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/config.json ...
	I0816 16:49:22.228808   17475 start.go:128] duration metric: took 24.097843577s to createHost
	I0816 16:49:22.228830   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.231121   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.231427   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.231449   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.231581   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.231776   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.231916   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.232045   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.232188   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:22.232328   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:22.232338   17475 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 16:49:22.329268   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723826962.306837323
	
	I0816 16:49:22.329293   17475 fix.go:216] guest clock: 1723826962.306837323
	I0816 16:49:22.329302   17475 fix.go:229] Guest: 2024-08-16 16:49:22.306837323 +0000 UTC Remote: 2024-08-16 16:49:22.228820507 +0000 UTC m=+24.194451298 (delta=78.016816ms)
	I0816 16:49:22.329347   17475 fix.go:200] guest clock delta is within tolerance: 78.016816ms
	I0816 16:49:22.329352   17475 start.go:83] releasing machines lock for "addons-671083", held for 24.198483464s
	I0816 16:49:22.329370   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.329601   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:22.331847   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.332122   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.332148   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.332295   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.332787   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.332972   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.333074   17475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 16:49:22.333128   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.333213   17475 ssh_runner.go:195] Run: cat /version.json
	I0816 16:49:22.333241   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.335809   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336125   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336152   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.336170   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336315   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.336496   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.336587   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.336610   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336657   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.336785   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.336850   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:22.336890   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.337035   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.337166   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:22.455116   17475 ssh_runner.go:195] Run: systemctl --version
	I0816 16:49:22.461526   17475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 16:49:22.625159   17475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 16:49:22.630466   17475 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 16:49:22.630529   17475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 16:49:22.645886   17475 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 16:49:22.645910   17475 start.go:495] detecting cgroup driver to use...
	I0816 16:49:22.645966   17475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 16:49:22.665926   17475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 16:49:22.679933   17475 docker.go:217] disabling cri-docker service (if available) ...
	I0816 16:49:22.680000   17475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 16:49:22.693228   17475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 16:49:22.706115   17475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 16:49:22.827685   17475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 16:49:22.970987   17475 docker.go:233] disabling docker service ...
	I0816 16:49:22.971051   17475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 16:49:22.984803   17475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 16:49:22.998013   17475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 16:49:23.137822   17475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 16:49:23.266235   17475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 16:49:23.286162   17475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 16:49:23.302966   17475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 16:49:23.303026   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.312392   17475 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 16:49:23.312464   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.321863   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.331321   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.340694   17475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 16:49:23.350176   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.359512   17475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.375249   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.384525   17475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 16:49:23.393049   17475 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 16:49:23.393097   17475 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 16:49:23.404223   17475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 16:49:23.412877   17475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 16:49:23.523051   17475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 16:49:23.654922   17475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 16:49:23.655064   17475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 16:49:23.659523   17475 start.go:563] Will wait 60s for crictl version
	I0816 16:49:23.659599   17475 ssh_runner.go:195] Run: which crictl
	I0816 16:49:23.663037   17475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 16:49:23.698352   17475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 16:49:23.698483   17475 ssh_runner.go:195] Run: crio --version
	I0816 16:49:23.724087   17475 ssh_runner.go:195] Run: crio --version
	I0816 16:49:23.751473   17475 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 16:49:23.752926   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:23.755470   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:23.755818   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:23.755839   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:23.756083   17475 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 16:49:23.760086   17475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 16:49:23.771879   17475 kubeadm.go:883] updating cluster {Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 16:49:23.771997   17475 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 16:49:23.772041   17475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 16:49:23.801894   17475 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 16:49:23.801959   17475 ssh_runner.go:195] Run: which lz4
	I0816 16:49:23.805923   17475 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 16:49:23.809737   17475 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 16:49:23.809762   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 16:49:24.866202   17475 crio.go:462] duration metric: took 1.060313922s to copy over tarball
	I0816 16:49:24.866281   17475 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 16:49:26.924459   17475 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.058146903s)
	I0816 16:49:26.924493   17475 crio.go:469] duration metric: took 2.058266681s to extract the tarball
	I0816 16:49:26.924503   17475 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 16:49:26.961094   17475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 16:49:27.001598   17475 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 16:49:27.001626   17475 cache_images.go:84] Images are preloaded, skipping loading
	I0816 16:49:27.001634   17475 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.31.0 crio true true} ...
	I0816 16:49:27.001731   17475 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-671083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 16:49:27.001791   17475 ssh_runner.go:195] Run: crio config
	I0816 16:49:27.041757   17475 cni.go:84] Creating CNI manager for ""
	I0816 16:49:27.041779   17475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:49:27.041791   17475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 16:49:27.041820   17475 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-671083 NodeName:addons-671083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 16:49:27.041972   17475 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-671083"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 16:49:27.042029   17475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 16:49:27.051237   17475 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 16:49:27.051308   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 16:49:27.060411   17475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 16:49:27.075960   17475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 16:49:27.090578   17475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0816 16:49:27.106363   17475 ssh_runner.go:195] Run: grep 192.168.39.240	control-plane.minikube.internal$ /etc/hosts
	I0816 16:49:27.109970   17475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 16:49:27.121189   17475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 16:49:27.232304   17475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 16:49:27.248032   17475 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083 for IP: 192.168.39.240
	I0816 16:49:27.248059   17475 certs.go:194] generating shared ca certs ...
	I0816 16:49:27.248077   17475 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.248237   17475 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 16:49:27.381753   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt ...
	I0816 16:49:27.381782   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt: {Name:mk6d327ac07a7e309565320b227eab2f0c3c16b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.381938   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key ...
	I0816 16:49:27.381948   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key: {Name:mk531a862bb1f6818fc284bd4510b9af89a30ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.382017   17475 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 16:49:27.529203   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt ...
	I0816 16:49:27.529229   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt: {Name:mk085bb605cf2710eff87a2d7387ebf03b6d81a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.529377   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key ...
	I0816 16:49:27.529388   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key: {Name:mk97b7b7a6a59b99d7bef0f92b9ec38593c29a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.529450   17475 certs.go:256] generating profile certs ...
	I0816 16:49:27.529500   17475 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.key
	I0816 16:49:27.529513   17475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt with IP's: []
	I0816 16:49:27.586097   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt ...
	I0816 16:49:27.586123   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: {Name:mke44386a63cceabbe31b6f26838a3bc63e55d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.586270   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.key ...
	I0816 16:49:27.586280   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.key: {Name:mk86f28cecac6f2f60291769bb16fc2a2c7ce4aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.586353   17475 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6
	I0816 16:49:27.586371   17475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240]
	I0816 16:49:27.739560   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6 ...
	I0816 16:49:27.739590   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6: {Name:mk83f1b4bb87ab0b9301b076c432e8b854cf7240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.739749   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6 ...
	I0816 16:49:27.739762   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6: {Name:mk5d84c5a9e73e6534ba86728e8ada61126679ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.739829   17475 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt
	I0816 16:49:27.739897   17475 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key
	I0816 16:49:27.739941   17475 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key
	I0816 16:49:27.739958   17475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt with IP's: []
	I0816 16:49:27.837567   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt ...
	I0816 16:49:27.837596   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt: {Name:mk15d605ea322d53750c270c4b1e85f4322af7fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.837762   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key ...
	I0816 16:49:27.837777   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key: {Name:mk5de68870834ec73c34e593e465169a09f08758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.837964   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 16:49:27.838005   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 16:49:27.838053   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 16:49:27.838101   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 16:49:27.838742   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 16:49:27.861769   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 16:49:27.883070   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 16:49:27.904384   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 16:49:27.927156   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 16:49:27.953057   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 16:49:27.976222   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 16:49:27.996750   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 16:49:28.017829   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 16:49:28.039777   17475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 16:49:28.054610   17475 ssh_runner.go:195] Run: openssl version
	I0816 16:49:28.059792   17475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 16:49:28.069136   17475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 16:49:28.073021   17475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 16:49:28.073064   17475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 16:49:28.078402   17475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 16:49:28.087529   17475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 16:49:28.091174   17475 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 16:49:28.091222   17475 kubeadm.go:392] StartCluster: {Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 16:49:28.091322   17475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 16:49:28.091382   17475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 16:49:28.124558   17475 cri.go:89] found id: ""
	I0816 16:49:28.124672   17475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 16:49:28.133925   17475 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 16:49:28.142576   17475 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 16:49:28.151416   17475 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 16:49:28.151437   17475 kubeadm.go:157] found existing configuration files:
	
	I0816 16:49:28.151490   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 16:49:28.159896   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 16:49:28.159962   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 16:49:28.168648   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 16:49:28.176866   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 16:49:28.176931   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 16:49:28.185493   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 16:49:28.193528   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 16:49:28.193594   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 16:49:28.201955   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 16:49:28.209840   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 16:49:28.209899   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 16:49:28.218065   17475 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 16:49:28.263114   17475 kubeadm.go:310] W0816 16:49:28.246550     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 16:49:28.263765   17475 kubeadm.go:310] W0816 16:49:28.247504     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 16:49:28.365306   17475 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 16:49:37.921355   17475 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 16:49:37.921434   17475 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 16:49:37.921534   17475 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 16:49:37.921675   17475 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 16:49:37.921820   17475 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 16:49:37.921895   17475 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 16:49:37.923502   17475 out.go:235]   - Generating certificates and keys ...
	I0816 16:49:37.923602   17475 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 16:49:37.923667   17475 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 16:49:37.923730   17475 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 16:49:37.923782   17475 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 16:49:37.923832   17475 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 16:49:37.923879   17475 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 16:49:37.923948   17475 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 16:49:37.924092   17475 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-671083 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0816 16:49:37.924148   17475 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 16:49:37.924263   17475 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-671083 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0816 16:49:37.924369   17475 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 16:49:37.924473   17475 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 16:49:37.924537   17475 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 16:49:37.924611   17475 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 16:49:37.924692   17475 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 16:49:37.924781   17475 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 16:49:37.924845   17475 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 16:49:37.924915   17475 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 16:49:37.925003   17475 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 16:49:37.925110   17475 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 16:49:37.925211   17475 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 16:49:37.926690   17475 out.go:235]   - Booting up control plane ...
	I0816 16:49:37.926769   17475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 16:49:37.926833   17475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 16:49:37.926890   17475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 16:49:37.927009   17475 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 16:49:37.927094   17475 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 16:49:37.927136   17475 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 16:49:37.927241   17475 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 16:49:37.927346   17475 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 16:49:37.927410   17475 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 507.361999ms
	I0816 16:49:37.927471   17475 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 16:49:37.927529   17475 kubeadm.go:310] [api-check] The API server is healthy after 5.002080019s
	I0816 16:49:37.927618   17475 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 16:49:37.927733   17475 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 16:49:37.927786   17475 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 16:49:37.927959   17475 kubeadm.go:310] [mark-control-plane] Marking the node addons-671083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 16:49:37.928017   17475 kubeadm.go:310] [bootstrap-token] Using token: xuuct1.enaoa72wl8k12y87
	I0816 16:49:37.929298   17475 out.go:235]   - Configuring RBAC rules ...
	I0816 16:49:37.929425   17475 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 16:49:37.929502   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 16:49:37.929620   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 16:49:37.929729   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 16:49:37.929835   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 16:49:37.929926   17475 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 16:49:37.930029   17475 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 16:49:37.930090   17475 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 16:49:37.930129   17475 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 16:49:37.930135   17475 kubeadm.go:310] 
	I0816 16:49:37.930197   17475 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 16:49:37.930209   17475 kubeadm.go:310] 
	I0816 16:49:37.930282   17475 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 16:49:37.930290   17475 kubeadm.go:310] 
	I0816 16:49:37.930310   17475 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 16:49:37.930367   17475 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 16:49:37.930416   17475 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 16:49:37.930422   17475 kubeadm.go:310] 
	I0816 16:49:37.930475   17475 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 16:49:37.930489   17475 kubeadm.go:310] 
	I0816 16:49:37.930532   17475 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 16:49:37.930539   17475 kubeadm.go:310] 
	I0816 16:49:37.930588   17475 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 16:49:37.930651   17475 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 16:49:37.930707   17475 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 16:49:37.930712   17475 kubeadm.go:310] 
	I0816 16:49:37.930782   17475 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 16:49:37.930849   17475 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 16:49:37.930855   17475 kubeadm.go:310] 
	I0816 16:49:37.930922   17475 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xuuct1.enaoa72wl8k12y87 \
	I0816 16:49:37.931007   17475 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 16:49:37.931026   17475 kubeadm.go:310] 	--control-plane 
	I0816 16:49:37.931032   17475 kubeadm.go:310] 
	I0816 16:49:37.931111   17475 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 16:49:37.931125   17475 kubeadm.go:310] 
	I0816 16:49:37.931190   17475 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xuuct1.enaoa72wl8k12y87 \
	I0816 16:49:37.931298   17475 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 16:49:37.931312   17475 cni.go:84] Creating CNI manager for ""
	I0816 16:49:37.931327   17475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:49:37.933466   17475 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 16:49:37.934520   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 16:49:37.945860   17475 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 16:49:37.965501   17475 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 16:49:37.965576   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-671083 minikube.k8s.io/updated_at=2024_08_16T16_49_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=addons-671083 minikube.k8s.io/primary=true
	I0816 16:49:37.965590   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:37.993859   17475 ops.go:34] apiserver oom_adj: -16
	I0816 16:49:38.104530   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:38.604771   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:39.105224   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:39.604849   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:40.105589   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:40.604979   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:41.105313   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:41.604842   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:41.682288   17475 kubeadm.go:1113] duration metric: took 3.716787052s to wait for elevateKubeSystemPrivileges
	I0816 16:49:41.682325   17475 kubeadm.go:394] duration metric: took 13.591107205s to StartCluster
	I0816 16:49:41.682349   17475 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:41.682478   17475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 16:49:41.682872   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:41.683062   17475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 16:49:41.683094   17475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 16:49:41.683152   17475 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0816 16:49:41.683258   17475 config.go:182] Loaded profile config "addons-671083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 16:49:41.683268   17475 addons.go:69] Setting cloud-spanner=true in profile "addons-671083"
	I0816 16:49:41.683275   17475 addons.go:69] Setting registry=true in profile "addons-671083"
	I0816 16:49:41.683256   17475 addons.go:69] Setting yakd=true in profile "addons-671083"
	I0816 16:49:41.683319   17475 addons.go:69] Setting ingress=true in profile "addons-671083"
	I0816 16:49:41.683321   17475 addons.go:69] Setting ingress-dns=true in profile "addons-671083"
	I0816 16:49:41.683315   17475 addons.go:69] Setting volcano=true in profile "addons-671083"
	I0816 16:49:41.683319   17475 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-671083"
	I0816 16:49:41.683337   17475 addons.go:234] Setting addon ingress=true in "addons-671083"
	I0816 16:49:41.683339   17475 addons.go:234] Setting addon ingress-dns=true in "addons-671083"
	I0816 16:49:41.683347   17475 addons.go:234] Setting addon volcano=true in "addons-671083"
	I0816 16:49:41.683364   17475 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-671083"
	I0816 16:49:41.683372   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683374   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683374   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683270   17475 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-671083"
	I0816 16:49:41.683449   17475 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-671083"
	I0816 16:49:41.683476   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683338   17475 addons.go:234] Setting addon yakd=true in "addons-671083"
	I0816 16:49:41.683263   17475 addons.go:69] Setting inspektor-gadget=true in profile "addons-671083"
	I0816 16:49:41.683533   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683581   17475 addons.go:234] Setting addon inspektor-gadget=true in "addons-671083"
	I0816 16:49:41.683614   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683807   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683811   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683815   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683830   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683833   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683839   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683856   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683265   17475 addons.go:69] Setting metrics-server=true in profile "addons-671083"
	I0816 16:49:41.683904   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683928   17475 addons.go:234] Setting addon metrics-server=true in "addons-671083"
	I0816 16:49:41.683939   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683956   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683297   17475 addons.go:234] Setting addon cloud-spanner=true in "addons-671083"
	I0816 16:49:41.683841   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683301   17475 addons.go:234] Setting addon registry=true in "addons-671083"
	I0816 16:49:41.683818   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684112   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683307   17475 addons.go:69] Setting default-storageclass=true in profile "addons-671083"
	I0816 16:49:41.684167   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683303   17475 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-671083"
	I0816 16:49:41.684234   17475 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-671083"
	I0816 16:49:41.684190   17475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-671083"
	I0816 16:49:41.683313   17475 addons.go:69] Setting storage-provisioner=true in profile "addons-671083"
	I0816 16:49:41.684279   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684297   17475 addons.go:234] Setting addon storage-provisioner=true in "addons-671083"
	I0816 16:49:41.683317   17475 addons.go:69] Setting helm-tiller=true in profile "addons-671083"
	I0816 16:49:41.684323   17475 addons.go:234] Setting addon helm-tiller=true in "addons-671083"
	I0816 16:49:41.684307   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683312   17475 addons.go:69] Setting gcp-auth=true in profile "addons-671083"
	I0816 16:49:41.684395   17475 mustload.go:65] Loading cluster: addons-671083
	I0816 16:49:41.684498   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684523   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683306   17475 addons.go:69] Setting volumesnapshots=true in profile "addons-671083"
	I0816 16:49:41.684573   17475 config.go:182] Loaded profile config "addons-671083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 16:49:41.684593   17475 addons.go:234] Setting addon volumesnapshots=true in "addons-671083"
	I0816 16:49:41.684647   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.684667   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684694   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.684837   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.684918   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684999   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.685234   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.685275   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.685324   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.685598   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685620   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.685620   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685649   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.685661   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685677   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.684966   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685946   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.686455   17475 out.go:177] * Verifying Kubernetes components...
	I0816 16:49:41.688411   17475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 16:49:41.705854   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I0816 16:49:41.706123   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0816 16:49:41.706257   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0816 16:49:41.706392   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.706515   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.706861   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.706883   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.706923   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.706955   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.707247   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.707798   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.707839   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.707846   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0816 16:49:41.708004   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.708164   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.708438   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.708590   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.708603   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.708641   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.708675   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.708812   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.708826   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.708876   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.714148   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.714207   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0816 16:49:41.714725   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.715069   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.715098   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.720826   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0816 16:49:41.720942   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.720963   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.721029   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I0816 16:49:41.720949   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45199
	I0816 16:49:41.721143   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.721160   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.721204   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.721236   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.721416   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.721436   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.723286   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.723735   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.723832   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.724288   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.724301   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.724397   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.724403   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.724776   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.725201   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.725233   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.734988   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.735206   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.735257   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.735623   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.735642   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.736046   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.736612   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.736657   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.737479   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.737515   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.740201   17475 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-671083"
	I0816 16:49:41.740242   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.740588   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.740646   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.740834   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0816 16:49:41.741355   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.741881   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.741898   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.742268   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.742476   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.744589   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.746136   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0816 16:49:41.746861   17475 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 16:49:41.747092   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.747602   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.747621   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.748003   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.748141   17475 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 16:49:41.748159   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 16:49:41.748179   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.748185   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.748986   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0816 16:49:41.749444   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.750231   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.750254   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.750831   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.752359   17475 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 16:49:41.752823   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.753440   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.753475   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.753663   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.753728   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.753810   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.753834   17475 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 16:49:41.753848   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 16:49:41.753865   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.753935   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.754058   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.754301   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.754336   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.757732   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.758205   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.758226   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.758547   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.758751   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.758894   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.759060   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.769371   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I0816 16:49:41.770102   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0816 16:49:41.770235   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.770620   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.770989   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I0816 16:49:41.771171   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.771194   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.771441   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.771517   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.771716   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.771900   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.771918   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.772292   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.772386   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0816 16:49:41.772899   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.773236   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.773808   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.773824   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.773885   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.773956   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.773970   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.774475   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.774802   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.775066   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.775518   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.775553   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.776177   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.776241   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0816 16:49:41.776514   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0816 16:49:41.777061   17475 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 16:49:41.777492   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.777498   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.777928   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.777941   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.778342   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.778417   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0816 16:49:41.778577   17475 addons.go:234] Setting addon default-storageclass=true in "addons-671083"
	I0816 16:49:41.778589   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.778608   17475 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 16:49:41.778611   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.778621   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 16:49:41.778637   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.778957   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.778989   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.779209   17475 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 16:49:41.779718   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.779735   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.780139   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.780307   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.780390   17475 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 16:49:41.780416   17475 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 16:49:41.780437   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.780556   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.780634   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.781682   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.781699   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.782047   17475 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 16:49:41.782596   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.783270   17475 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 16:49:41.783287   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 16:49:41.783303   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.783331   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.783377   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.783666   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.783688   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.784435   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.784471   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.784709   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.784752   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0816 16:49:41.785072   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.785092   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.785290   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.785293   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.785494   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.785984   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.786003   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.786062   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.786070   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.786092   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.786234   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.786543   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.786545   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.786747   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.786801   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.786982   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.787148   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.787411   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.787443   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.787456   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.788197   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.788424   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.788617   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.788786   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.790493   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.790751   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:41.790763   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:41.792523   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:41.792547   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:41.792553   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:41.792559   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:41.792563   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:41.792787   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:41.792800   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	W0816 16:49:41.792887   17475 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 16:49:41.798158   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0816 16:49:41.798674   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.799237   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.799256   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.799627   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.799833   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.803488   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0816 16:49:41.803662   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0816 16:49:41.804093   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.804199   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.804684   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.804712   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.804869   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.804888   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.805224   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.805768   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.805809   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.806031   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33101
	I0816 16:49:41.806474   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.806530   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.806668   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0816 16:49:41.806922   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.807046   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.807055   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.807620   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.807929   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.808338   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0816 16:49:41.809298   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.809620   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0816 16:49:41.809769   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.809980   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.810003   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0816 16:49:41.809852   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.810455   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.810475   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.810636   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.810787   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.810807   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.811021   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.811121   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.811182   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.811221   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.811690   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.811728   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.811783   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0816 16:49:41.812135   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.812177   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.812486   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.812509   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.812564   17475 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 16:49:41.812789   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.812812   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.812880   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.813067   17475 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 16:49:41.813158   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.813441   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0816 16:49:41.813540   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.813581   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.813603   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.814514   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.814531   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 16:49:41.814563   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 16:49:41.814898   17475 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 16:49:41.814917   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.815335   17475 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 16:49:41.815395   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.815899   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.815917   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.816761   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.816978   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.817025   17475 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 16:49:41.817038   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 16:49:41.817061   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.818153   17475 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0816 16:49:41.818239   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 16:49:41.819712   17475 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0816 16:49:41.819730   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0816 16:49:41.819747   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.819966   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.820044   17475 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 16:49:41.820061   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 16:49:41.820075   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.820350   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.820798   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.820818   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.821200   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.821635   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.821862   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.822113   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.823897   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.823929   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.823948   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.824035   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.824291   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.824343   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.824357   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.824394   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.824591   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.824679   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.824841   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.825009   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.825011   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.825267   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.825588   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.825611   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.825853   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.826025   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.826154   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.826271   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.829052   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0816 16:49:41.829363   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.829826   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.829845   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.830266   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.830728   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.830757   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.835187   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0816 16:49:41.835328   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0816 16:49:41.836257   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.836266   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.836746   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.836769   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.836902   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.836919   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.837273   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.837604   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.837646   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.837733   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.839652   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.839910   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.841666   17475 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 16:49:41.841685   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 16:49:41.842075   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0816 16:49:41.842451   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.842798   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 16:49:41.842823   17475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 16:49:41.842842   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.842910   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.842925   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.842912   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0816 16:49:41.843009   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 16:49:41.843023   17475 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 16:49:41.843041   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.843381   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.843561   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.843903   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.844573   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.844606   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.845183   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.845644   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.846501   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.846883   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.847276   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.847300   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.847429   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.847783   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.847907   17475 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 16:49:41.848284   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.848321   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.848377   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.848554   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.849193   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.849218   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.849392   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.849545   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.849684   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.849728   17475 out.go:177]   - Using image docker.io/busybox:stable
	I0816 16:49:41.849745   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 16:49:41.849838   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.851112   17475 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 16:49:41.851129   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 16:49:41.851147   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.851976   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0816 16:49:41.852006   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 16:49:41.852554   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.853154   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.853178   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.853551   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.853727   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.854441   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 16:49:41.854598   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.855094   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.855124   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.855166   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.855361   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.855363   17475 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 16:49:41.855400   17475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 16:49:41.855408   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.855496   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.855677   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.855795   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.856511   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 16:49:41.857585   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 16:49:41.858390   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.858748   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.858774   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.858926   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.859151   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.859306   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.859441   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.859787   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 16:49:41.860975   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 16:49:41.862091   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 16:49:41.863137   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 16:49:41.863159   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 16:49:41.863181   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.866255   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.866639   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.866654   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.866816   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.866970   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.867058   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.867130   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	W0816 16:49:41.877808   17475 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46902->192.168.39.240:22: read: connection reset by peer
	I0816 16:49:41.877847   17475 retry.go:31] will retry after 205.707768ms: ssh: handshake failed: read tcp 192.168.39.1:46902->192.168.39.240:22: read: connection reset by peer
	I0816 16:49:42.128251   17475 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 16:49:42.128276   17475 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 16:49:42.147026   17475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 16:49:42.147049   17475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 16:49:42.148901   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 16:49:42.148918   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 16:49:42.211397   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 16:49:42.214063   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 16:49:42.234360   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 16:49:42.234390   17475 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 16:49:42.244323   17475 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 16:49:42.244348   17475 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 16:49:42.246982   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 16:49:42.248024   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 16:49:42.261768   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 16:49:42.279231   17475 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 16:49:42.279252   17475 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 16:49:42.281911   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 16:49:42.284283   17475 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0816 16:49:42.284305   17475 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0816 16:49:42.289272   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 16:49:42.293655   17475 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 16:49:42.293676   17475 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 16:49:42.420307   17475 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 16:49:42.420339   17475 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 16:49:42.452168   17475 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 16:49:42.452194   17475 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0816 16:49:42.463898   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 16:49:42.463931   17475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 16:49:42.506638   17475 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 16:49:42.506657   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 16:49:42.531253   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 16:49:42.531281   17475 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 16:49:42.576917   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 16:49:42.576946   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 16:49:42.580927   17475 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 16:49:42.580947   17475 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 16:49:42.616752   17475 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 16:49:42.616774   17475 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 16:49:42.670886   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 16:49:42.727245   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 16:49:42.727277   17475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 16:49:42.728653   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 16:49:42.728672   17475 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 16:49:42.752374   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 16:49:42.795598   17475 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 16:49:42.795633   17475 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 16:49:42.813915   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 16:49:42.813942   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 16:49:42.855297   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 16:49:42.855333   17475 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 16:49:42.864883   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 16:49:42.889069   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 16:49:42.889093   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 16:49:42.972089   17475 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 16:49:42.972112   17475 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 16:49:42.993675   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 16:49:42.993700   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 16:49:43.031829   17475 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 16:49:43.031849   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 16:49:43.078742   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 16:49:43.156969   17475 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 16:49:43.156995   17475 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 16:49:43.211009   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 16:49:43.245841   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 16:49:43.245910   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 16:49:43.436179   17475 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 16:49:43.436212   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 16:49:43.471720   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 16:49:43.471748   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 16:49:43.645354   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 16:49:43.645375   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 16:49:43.686628   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 16:49:43.911641   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 16:49:43.911791   17475 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 16:49:44.006173   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 16:49:44.006195   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 16:49:44.159626   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 16:49:44.159651   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 16:49:44.289626   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 16:49:44.289656   17475 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 16:49:44.553241   17475 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.406162375s)
	I0816 16:49:44.553275   17475 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0816 16:49:44.553313   17475 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.406257866s)
	I0816 16:49:44.554511   17475 node_ready.go:35] waiting up to 6m0s for node "addons-671083" to be "Ready" ...
	I0816 16:49:44.569361   17475 node_ready.go:49] node "addons-671083" has status "Ready":"True"
	I0816 16:49:44.569383   17475 node_ready.go:38] duration metric: took 14.852002ms for node "addons-671083" to be "Ready" ...
	I0816 16:49:44.569393   17475 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 16:49:44.652691   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 16:49:44.653817   17475 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:45.094988   17475 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-671083" context rescaled to 1 replicas
	I0816 16:49:46.663921   17475 pod_ready.go:103] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:48.690025   17475 pod_ready.go:103] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:48.841302   17475 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 16:49:48.841341   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:48.844288   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:48.844610   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:48.844654   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:48.844789   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:48.845023   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:48.845183   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:48.845422   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:49.074767   17475 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 16:49:49.102644   17475 addons.go:234] Setting addon gcp-auth=true in "addons-671083"
	I0816 16:49:49.102732   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:49.103176   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:49.103215   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:49.118790   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0816 16:49:49.119299   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:49.119862   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:49.119894   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:49.120273   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:49.120925   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:49.120960   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:49.136215   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0816 16:49:49.136701   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:49.137278   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:49.137306   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:49.137700   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:49.137914   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:49.139630   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:49.139887   17475 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 16:49:49.139914   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:49.143232   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:49.143673   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:49.143705   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:49.143842   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:49.144046   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:49.144217   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:49.144394   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:50.347685   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.136249715s)
	I0816 16:49:50.347735   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.347748   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.347786   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.133690502s)
	I0816 16:49:50.347837   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.347850   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.347851   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.100823994s)
	I0816 16:49:50.347871   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.347884   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.347965   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.099895276s)
	I0816 16:49:50.348012   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348051   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348198   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.086405668s)
	I0816 16:49:50.348226   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348233   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348299   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.066369333s)
	I0816 16:49:50.348319   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348327   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348405   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.0591027s)
	I0816 16:49:50.348418   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348425   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348495   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.677584762s)
	I0816 16:49:50.348515   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348529   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348574   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.348582   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.596181723s)
	I0816 16:49:50.348595   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348603   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348614   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.348651   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.348660   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348667   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348698   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.483789547s)
	I0816 16:49:50.348714   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348720   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.348722   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348729   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.348737   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348745   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348784   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.270009033s)
	I0816 16:49:50.348798   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348805   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348912   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.13786678s)
	W0816 16:49:50.348939   17475 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 16:49:50.348965   17475 retry.go:31] will retry after 150.564904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 16:49:50.349042   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.66238718s)
	I0816 16:49:50.349056   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.349063   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.349108   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349128   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.349140   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.349276   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349300   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349315   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349331   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349354   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.349365   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350094   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350117   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350126   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.350133   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.350184   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350209   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350216   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350224   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.350232   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.350533   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350587   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350602   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350611   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.350621   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.350680   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350706   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350717   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350758   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350888   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350943   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350954   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350963   17475 addons.go:475] Verifying addon registry=true in "addons-671083"
	I0816 16:49:50.351484   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.351515   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.351526   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.352393   17475 out.go:177] * Verifying registry addon...
	I0816 16:49:50.352749   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.352799   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.352834   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.352861   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.352884   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.352900   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348014   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.352934   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.352945   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.352953   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348116   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.352994   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.353002   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.353009   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.353017   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.353025   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.348135   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.353051   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.353059   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.353067   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348155   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.353033   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.353083   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.353075   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.353909   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.353933   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.353940   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354066   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354098   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354106   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354113   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.354121   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.354169   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354189   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354221   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354229   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354236   17475 addons.go:475] Verifying addon ingress=true in "addons-671083"
	I0816 16:49:50.354273   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354327   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354335   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354343   17475 addons.go:475] Verifying addon metrics-server=true in "addons-671083"
	I0816 16:49:50.348086   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354597   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354608   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.354616   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.355035   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.355068   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.355075   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.355613   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.355623   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354304   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.355782   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.356037   17475 out.go:177] * Verifying ingress addon...
	I0816 16:49:50.356124   17475 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 16:49:50.357023   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.357086   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.357123   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.357337   17475 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-671083 service yakd-dashboard -n yakd-dashboard
	
	I0816 16:49:50.358313   17475 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 16:49:50.382266   17475 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 16:49:50.382295   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:50.385956   17475 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 16:49:50.385975   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:50.398837   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.398862   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.399297   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.399345   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.399354   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	W0816 16:49:50.399433   17475 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0816 16:49:50.407098   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.407118   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.407404   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.407424   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.499922   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 16:49:50.870589   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:50.872882   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:51.175544   17475 pod_ready.go:103] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:51.355975   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.703230957s)
	I0816 16:49:51.356030   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:51.356043   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:51.355988   17475 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.216077119s)
	I0816 16:49:51.356388   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:51.356410   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:51.356418   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:51.356424   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:51.356695   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:51.356708   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:51.356722   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:51.356731   17475 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-671083"
	I0816 16:49:51.359305   17475 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 16:49:51.359320   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 16:49:51.360714   17475 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 16:49:51.361539   17475 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 16:49:51.361552   17475 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 16:49:51.361626   17475 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 16:49:51.397203   17475 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 16:49:51.397226   17475 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 16:49:51.399360   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:51.399962   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:51.400495   17475 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 16:49:51.400519   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:51.503342   17475 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 16:49:51.503369   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 16:49:51.556046   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 16:49:51.860495   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:51.863128   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:51.866033   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:52.450400   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:52.451065   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:52.451185   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:52.661731   17475 pod_ready.go:93] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:52.661755   17475 pod_ready.go:82] duration metric: took 8.007913783s for pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:52.661766   17475 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:52.728497   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.228529416s)
	I0816 16:49:52.728555   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:52.728583   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:52.728872   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:52.728923   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:52.728934   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:52.728954   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:52.728967   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:52.729166   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:52.729182   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:52.874580   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:52.876521   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:52.880122   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:53.018606   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.462523547s)
	I0816 16:49:53.018652   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:53.018667   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:53.018955   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:53.019011   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:53.019031   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:53.019039   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:53.019056   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:53.019339   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:53.019341   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:53.019370   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:53.021232   17475 addons.go:475] Verifying addon gcp-auth=true in "addons-671083"
	I0816 16:49:53.022678   17475 out.go:177] * Verifying gcp-auth addon...
	I0816 16:49:53.024482   17475 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 16:49:53.071131   17475 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 16:49:53.071153   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:53.368574   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:53.368617   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:53.372207   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:53.527731   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:53.862012   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:53.866439   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:53.869853   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:54.028501   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:54.170616   17475 pod_ready.go:98] pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:54 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.240 HostIPs:[{IP:192.168.39.240}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-16 16:49:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-16 16:49:48 +0000 UTC,FinishedAt:2024-08-16 16:49:53 +0000 UTC,ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142 Started:0xc002151f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001af3eb0} {Name:kube-api-access-nxb56 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001af3ec0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0816 16:49:54.170647   17475 pod_ready.go:82] duration metric: took 1.508873244s for pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace to be "Ready" ...
	E0816 16:49:54.170661   17475 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:54 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.240 HostIPs:[{IP:192.168.39.240}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-16 16:49:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-16 16:49:48 +0000 UTC,FinishedAt:2024-08-16 16:49:53 +0000 UTC,ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142 Started:0xc002151f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001af3eb0} {Name:kube-api-access-nxb56 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001af3ec0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0816 16:49:54.170673   17475 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.176700   17475 pod_ready.go:93] pod "etcd-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.176717   17475 pod_ready.go:82] duration metric: took 6.035654ms for pod "etcd-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.176725   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.182254   17475 pod_ready.go:93] pod "kube-apiserver-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.182270   17475 pod_ready.go:82] duration metric: took 5.53894ms for pod "kube-apiserver-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.182277   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.187072   17475 pod_ready.go:93] pod "kube-controller-manager-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.187086   17475 pod_ready.go:82] duration metric: took 4.802902ms for pod "kube-controller-manager-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.187093   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vcpxh" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.259709   17475 pod_ready.go:93] pod "kube-proxy-vcpxh" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.259728   17475 pod_ready.go:82] duration metric: took 72.630163ms for pod "kube-proxy-vcpxh" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.259736   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.360303   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:54.362427   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:54.364968   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:54.528524   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:54.658649   17475 pod_ready.go:93] pod "kube-scheduler-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.658678   17475 pod_ready.go:82] duration metric: took 398.934745ms for pod "kube-scheduler-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.658691   17475 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.860436   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:54.861875   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:54.864931   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:55.028403   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:55.359913   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:55.362116   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:55.365064   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:55.528643   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:55.860825   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:55.862378   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:55.865783   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:56.027608   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:56.360344   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:56.362033   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:56.365250   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:56.527711   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:56.664818   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:56.860638   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:56.863671   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:56.865090   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:57.028263   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:57.360395   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:57.362817   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:57.365142   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:57.528811   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:57.860107   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:57.862225   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:57.865723   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:58.029726   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:58.360126   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:58.362274   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:58.365902   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:58.528104   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:58.665512   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:58.860044   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:58.862238   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:58.865239   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:59.027536   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:59.360358   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:59.362376   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:59.366226   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:59.528833   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:59.859701   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:59.862032   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:59.865298   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:00.028459   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:00.361804   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:00.363972   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:00.365409   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:00.528070   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:00.862045   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:00.862530   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:00.865523   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:01.029990   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:01.166119   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:01.361255   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:01.363021   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:01.365135   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:01.528355   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:01.860016   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:01.862095   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:01.865336   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:02.027282   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:02.368576   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:02.373830   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:02.374943   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:02.527797   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:02.860332   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:02.862817   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:02.865094   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:03.028346   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:03.360420   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:03.363612   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:03.367257   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:03.527459   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:03.664683   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:03.860772   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:03.862218   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:03.865423   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:04.027421   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:04.362681   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:04.365575   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:04.370202   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:04.528717   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:04.860421   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:04.861986   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:04.865449   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:05.028402   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:05.360473   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:05.363148   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:05.367676   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:05.528896   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:05.665386   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:05.859986   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:05.861864   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:05.864990   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:06.028196   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:06.362331   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:06.362468   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:06.368886   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:06.528062   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:06.859992   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:06.862545   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:06.865837   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:07.029046   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:07.360305   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:07.362694   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:07.365104   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:07.528799   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:07.860236   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:07.862438   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:07.865459   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:08.028155   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:08.164898   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:08.373773   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:08.373863   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:08.374288   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:08.528884   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:08.861329   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:08.863059   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:08.865303   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:09.027577   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:09.360522   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:09.362691   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:09.364936   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:09.528957   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:09.664732   17475 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"True"
	I0816 16:50:09.664755   17475 pod_ready.go:82] duration metric: took 15.00605745s for pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace to be "Ready" ...
	I0816 16:50:09.664763   17475 pod_ready.go:39] duration metric: took 25.095357982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 16:50:09.664778   17475 api_server.go:52] waiting for apiserver process to appear ...
	I0816 16:50:09.664827   17475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 16:50:09.682601   17475 api_server.go:72] duration metric: took 27.999466706s to wait for apiserver process to appear ...
	I0816 16:50:09.682628   17475 api_server.go:88] waiting for apiserver healthz status ...
	I0816 16:50:09.682645   17475 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0816 16:50:09.687727   17475 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0816 16:50:09.688733   17475 api_server.go:141] control plane version: v1.31.0
	I0816 16:50:09.688755   17475 api_server.go:131] duration metric: took 6.121364ms to wait for apiserver health ...
	I0816 16:50:09.688763   17475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 16:50:09.697181   17475 system_pods.go:59] 18 kube-system pods found
	I0816 16:50:09.697212   17475 system_pods.go:61] "coredns-6f6b679f8f-jq9bq" [50cf4e20-39bf-4c95-9744-3f86148fcb61] Running
	I0816 16:50:09.697222   17475 system_pods.go:61] "csi-hostpath-attacher-0" [828bbc78-aefd-4414-b73f-3386e27ddf03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 16:50:09.697228   17475 system_pods.go:61] "csi-hostpath-resizer-0" [6e1d39ba-5f5f-4cdf-8109-b1382360eccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 16:50:09.697237   17475 system_pods.go:61] "csi-hostpathplugin-lfs24" [344d6dad-37be-4ec3-8791-fde08e6ebd57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 16:50:09.697242   17475 system_pods.go:61] "etcd-addons-671083" [147192dd-da81-4ad1-8a05-52eedfbc84fd] Running
	I0816 16:50:09.697246   17475 system_pods.go:61] "kube-apiserver-addons-671083" [71555bba-161f-472e-90f1-cfe377e16b84] Running
	I0816 16:50:09.697250   17475 system_pods.go:61] "kube-controller-manager-addons-671083" [3382946c-61b4-45a7-8b77-e63a0a7f9d34] Running
	I0816 16:50:09.697253   17475 system_pods.go:61] "kube-ingress-dns-minikube" [a737f23d-c62b-4073-9b90-6c95e9a3374b] Running
	I0816 16:50:09.697256   17475 system_pods.go:61] "kube-proxy-vcpxh" [fa9fb911-4140-45c4-b33c-e7c7616ee708] Running
	I0816 16:50:09.697260   17475 system_pods.go:61] "kube-scheduler-addons-671083" [944ee8a4-dc5e-481b-bedd-56a6c34ba6e7] Running
	I0816 16:50:09.697265   17475 system_pods.go:61] "metrics-server-8988944d9-qjczl" [499be229-e123-4025-afef-b0608d31b95d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 16:50:09.697271   17475 system_pods.go:61] "nvidia-device-plugin-daemonset-6fkvh" [fad33474-a661-4441-a3d3-61e1e753fc6a] Running
	I0816 16:50:09.697276   17475 system_pods.go:61] "registry-6fb4cdfc84-rvzfr" [ef669560-d120-4b0c-96ee-3b4786b10c8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 16:50:09.697284   17475 system_pods.go:61] "registry-proxy-qpbf4" [afdfd628-7037-4056-b825-d6a9bf88c250] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 16:50:09.697291   17475 system_pods.go:61] "snapshot-controller-56fcc65765-2trrn" [bd75e67c-ed92-466e-8915-d2d5d1e87ad6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.697300   17475 system_pods.go:61] "snapshot-controller-56fcc65765-6kxd8" [2b18846d-7cdb-4733-8f3a-7f522ff67f18] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.697305   17475 system_pods.go:61] "storage-provisioner" [eb6c00fa-72db-4dfe-a3d9-054186223927] Running
	I0816 16:50:09.697311   17475 system_pods.go:61] "tiller-deploy-b48cc5f79-xdgrc" [9075d95d-30f9-45ec-944b-3ee3d7e01862] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 16:50:09.697316   17475 system_pods.go:74] duration metric: took 8.547956ms to wait for pod list to return data ...
	I0816 16:50:09.697324   17475 default_sa.go:34] waiting for default service account to be created ...
	I0816 16:50:09.699598   17475 default_sa.go:45] found service account: "default"
	I0816 16:50:09.699617   17475 default_sa.go:55] duration metric: took 2.286452ms for default service account to be created ...
	I0816 16:50:09.699643   17475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 16:50:09.709799   17475 system_pods.go:86] 18 kube-system pods found
	I0816 16:50:09.709824   17475 system_pods.go:89] "coredns-6f6b679f8f-jq9bq" [50cf4e20-39bf-4c95-9744-3f86148fcb61] Running
	I0816 16:50:09.709833   17475 system_pods.go:89] "csi-hostpath-attacher-0" [828bbc78-aefd-4414-b73f-3386e27ddf03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 16:50:09.709840   17475 system_pods.go:89] "csi-hostpath-resizer-0" [6e1d39ba-5f5f-4cdf-8109-b1382360eccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 16:50:09.709847   17475 system_pods.go:89] "csi-hostpathplugin-lfs24" [344d6dad-37be-4ec3-8791-fde08e6ebd57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 16:50:09.709852   17475 system_pods.go:89] "etcd-addons-671083" [147192dd-da81-4ad1-8a05-52eedfbc84fd] Running
	I0816 16:50:09.709857   17475 system_pods.go:89] "kube-apiserver-addons-671083" [71555bba-161f-472e-90f1-cfe377e16b84] Running
	I0816 16:50:09.709861   17475 system_pods.go:89] "kube-controller-manager-addons-671083" [3382946c-61b4-45a7-8b77-e63a0a7f9d34] Running
	I0816 16:50:09.709866   17475 system_pods.go:89] "kube-ingress-dns-minikube" [a737f23d-c62b-4073-9b90-6c95e9a3374b] Running
	I0816 16:50:09.709869   17475 system_pods.go:89] "kube-proxy-vcpxh" [fa9fb911-4140-45c4-b33c-e7c7616ee708] Running
	I0816 16:50:09.709873   17475 system_pods.go:89] "kube-scheduler-addons-671083" [944ee8a4-dc5e-481b-bedd-56a6c34ba6e7] Running
	I0816 16:50:09.709881   17475 system_pods.go:89] "metrics-server-8988944d9-qjczl" [499be229-e123-4025-afef-b0608d31b95d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 16:50:09.709886   17475 system_pods.go:89] "nvidia-device-plugin-daemonset-6fkvh" [fad33474-a661-4441-a3d3-61e1e753fc6a] Running
	I0816 16:50:09.709892   17475 system_pods.go:89] "registry-6fb4cdfc84-rvzfr" [ef669560-d120-4b0c-96ee-3b4786b10c8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 16:50:09.709900   17475 system_pods.go:89] "registry-proxy-qpbf4" [afdfd628-7037-4056-b825-d6a9bf88c250] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 16:50:09.709906   17475 system_pods.go:89] "snapshot-controller-56fcc65765-2trrn" [bd75e67c-ed92-466e-8915-d2d5d1e87ad6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.709912   17475 system_pods.go:89] "snapshot-controller-56fcc65765-6kxd8" [2b18846d-7cdb-4733-8f3a-7f522ff67f18] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.709918   17475 system_pods.go:89] "storage-provisioner" [eb6c00fa-72db-4dfe-a3d9-054186223927] Running
	I0816 16:50:09.709924   17475 system_pods.go:89] "tiller-deploy-b48cc5f79-xdgrc" [9075d95d-30f9-45ec-944b-3ee3d7e01862] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 16:50:09.709931   17475 system_pods.go:126] duration metric: took 10.282712ms to wait for k8s-apps to be running ...
	I0816 16:50:09.709940   17475 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 16:50:09.709979   17475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 16:50:09.723790   17475 system_svc.go:56] duration metric: took 13.84229ms WaitForService to wait for kubelet
	I0816 16:50:09.723820   17475 kubeadm.go:582] duration metric: took 28.040689568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 16:50:09.723838   17475 node_conditions.go:102] verifying NodePressure condition ...
	I0816 16:50:09.726840   17475 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 16:50:09.726863   17475 node_conditions.go:123] node cpu capacity is 2
	I0816 16:50:09.726874   17475 node_conditions.go:105] duration metric: took 3.032489ms to run NodePressure ...
	I0816 16:50:09.726885   17475 start.go:241] waiting for startup goroutines ...
	I0816 16:50:09.860518   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:09.862404   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:09.864805   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:10.028321   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:10.360593   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:10.361621   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:10.365099   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:10.528915   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:10.861351   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:10.862786   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:10.864980   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:11.029110   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:11.360951   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:11.363856   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:11.366756   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:11.527787   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:11.860121   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:11.862412   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:11.864801   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:12.028311   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:12.359949   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:12.361710   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:12.364933   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:12.528444   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:12.860429   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:12.861987   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:12.865606   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:13.028157   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:13.361318   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:13.363054   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:13.365612   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:13.527966   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:13.860777   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:13.863081   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:13.864923   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:14.028656   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:14.360939   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:14.363038   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:14.366705   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:14.527977   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:14.861821   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:14.863606   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:14.865787   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:15.028289   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:15.359348   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:15.362133   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:15.365607   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:15.527553   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:15.859735   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:15.861965   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:15.865437   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:16.027811   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:16.360973   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:16.362531   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:16.368166   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:16.529397   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:16.860607   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:16.862916   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:16.866133   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:17.028938   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:17.360520   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:17.362744   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:17.364962   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:17.528876   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:17.861194   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:17.863021   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:17.865493   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:18.027810   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:18.361020   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:18.362337   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:18.365496   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:18.527958   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:18.859350   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:18.861493   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:18.864545   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:19.028181   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:19.359237   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:19.361677   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:19.364483   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:19.528007   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:19.860594   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:19.863399   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:19.867358   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:20.028421   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:20.359503   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:20.361675   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:20.364496   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:20.527916   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:20.860517   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:20.862939   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:20.867912   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:21.027829   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:21.360023   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:21.362233   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:21.365635   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:21.527974   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:21.862530   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:21.862626   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:21.864681   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:22.028093   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:22.361763   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:22.363337   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:22.365765   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:22.527945   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:22.860383   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:22.862402   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:22.865016   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:23.027787   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:23.359502   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:23.362524   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:23.365338   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:23.527671   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:23.860123   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:23.861975   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:23.867744   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:24.027740   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:24.359884   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:24.362739   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:24.365081   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:24.528848   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:24.862681   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:24.862917   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:24.870759   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:25.028240   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:25.361947   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:25.363475   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:25.365319   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:25.530241   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:25.859923   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:25.861924   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:25.865187   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:26.028999   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:26.360955   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:26.362729   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:26.364897   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:26.528878   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:26.860460   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:26.862060   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:26.865429   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:27.028003   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:27.360653   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:27.363275   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:27.365541   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:27.528087   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:27.860013   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:27.862854   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:27.864926   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:28.028493   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:28.542203   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:28.542591   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:28.543012   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:28.544368   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:28.860611   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:28.862712   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:28.865682   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:29.027982   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:29.361087   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:29.362686   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:29.365999   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:29.528874   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:29.860263   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:29.862223   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:29.865930   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:30.027859   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:30.363733   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:30.364350   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:30.368921   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:30.528932   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:30.860771   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:30.862965   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:30.865567   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:31.027729   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:31.360301   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:31.362972   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:31.365100   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:31.528940   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:31.862138   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:31.863473   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:31.866338   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:32.028968   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:32.360706   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:32.365524   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:32.367508   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:32.527824   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:32.860280   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:32.862802   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:32.864671   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:33.028371   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:33.360723   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:33.362424   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:33.372647   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:33.527875   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:33.862940   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:33.865771   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:33.865771   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:34.029225   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:34.360451   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:34.362260   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:34.365305   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:34.528058   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:34.862264   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:34.865647   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:34.866191   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:35.027514   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:35.360698   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:35.362567   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:35.365704   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:35.527563   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:35.860910   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:35.862418   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:35.865906   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:36.027954   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:36.360652   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:36.362901   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:36.364928   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:36.528116   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:36.859841   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:36.862330   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:36.865568   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:37.028470   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:37.359460   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:37.363051   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:37.366268   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:37.530384   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:37.859664   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:37.861992   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:37.865073   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:38.028547   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:38.360100   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:38.362526   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:38.364893   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:38.527710   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:38.860663   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:38.862716   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:38.864799   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:39.028010   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:39.794797   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:39.795224   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:39.795576   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:39.796218   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:39.860376   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:39.863163   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:39.865344   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:40.027488   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:40.359960   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:40.362645   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:40.367745   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:40.528062   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:40.859951   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:40.862471   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:40.866853   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:41.028168   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:41.360878   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:41.363269   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:41.367081   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:41.527920   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:41.859803   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:41.862207   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:41.865709   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:42.027949   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:42.360540   17475 kapi.go:107] duration metric: took 52.004412255s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 16:50:42.362911   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:42.364930   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:42.528316   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:42.863554   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:42.865884   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:43.028655   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:43.364573   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:43.366792   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:43.528335   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:43.864684   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:43.867075   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:44.027875   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:44.365907   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:44.367079   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:44.528725   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:44.863371   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:44.865795   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:45.029931   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:45.362678   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:45.364893   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:45.528345   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:45.862243   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:45.866012   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:46.027882   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:46.365029   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:46.367018   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:46.528857   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:46.863142   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:46.865924   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:47.029094   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:47.363673   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:47.369300   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:47.528724   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:47.862321   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:47.865447   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:48.028485   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:48.366233   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:48.366582   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:48.527898   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:48.863701   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:48.865555   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:49.028601   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:49.365487   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:49.369208   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:49.533382   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:49.863442   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:49.865812   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:50.027862   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:50.365155   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:50.367813   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:50.528819   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:50.863413   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:50.865627   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:51.027954   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:51.362763   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:51.365011   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:51.528428   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:51.863348   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:51.866960   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:52.028687   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:52.362885   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:52.365329   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:52.527454   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:52.863075   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:52.866466   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:53.027928   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:53.368086   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:53.369333   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:53.534008   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:53.867163   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:53.868069   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:54.028761   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:54.365042   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:54.368168   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:54.528706   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:54.865194   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:54.867551   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:55.028176   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:55.363114   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:55.365685   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:55.527754   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:55.862084   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:55.865950   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:56.028645   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:56.363119   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:56.366567   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:56.528800   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:56.864169   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:56.866121   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:57.028108   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:57.363505   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:57.365766   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:57.528205   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:57.862943   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:57.865110   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:58.028487   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:58.770497   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:58.773998   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:58.774579   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:58.865527   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:58.865599   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:59.029254   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:59.363442   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:59.365904   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:59.527567   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:59.862234   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:59.865507   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:00.028193   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:00.363408   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:00.366531   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:00.528442   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:00.863140   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:00.866339   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:01.027814   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:01.362574   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:01.365337   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:01.661019   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:01.863194   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:01.865992   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:02.028536   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:02.362440   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:02.365669   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:02.527888   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:02.862505   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:02.866144   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:03.028918   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:03.363014   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:03.366846   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:03.529009   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:03.863598   17475 kapi.go:107] duration metric: took 1m13.505283747s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 16:51:03.866415   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:04.027831   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:04.365949   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:04.566853   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:04.865922   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:05.028121   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:05.366873   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:05.527746   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:05.865856   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:06.029004   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:06.368160   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:06.528098   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:06.865940   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:07.028510   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:07.366427   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:07.765929   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:07.930061   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:08.033467   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:08.366374   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:08.528612   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:08.867858   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:09.028678   17475 kapi.go:107] duration metric: took 1m16.004190043s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 16:51:09.030463   17475 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-671083 cluster.
	I0816 16:51:09.031770   17475 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 16:51:09.032931   17475 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
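
The gcp-auth message above suggests adding a `gcp-auth-skip-secret` label to pods that should not have credentials mounted. A minimal sketch of such a pod spec is shown below; the label value "true", the pod name, and the image are illustrative assumptions and are not taken from this log:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth-example          # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"     # label key from the message above; value assumed
    spec:
      containers:
      - name: app
        image: busybox                   # placeholder image for illustration
        command: ["sleep", "3600"]
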
	I0816 16:51:09.367562   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:09.867216   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:10.367044   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:10.865759   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:11.366694   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:11.866871   17475 kapi.go:107] duration metric: took 1m20.505239805s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 16:51:11.868681   17475 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, helm-tiller, inspektor-gadget, storage-provisioner, metrics-server, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0816 16:51:11.869969   17475 addons.go:510] duration metric: took 1m30.186821124s for enable addons: enabled=[cloud-spanner nvidia-device-plugin helm-tiller inspektor-gadget storage-provisioner metrics-server ingress-dns yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0816 16:51:11.869999   17475 start.go:246] waiting for cluster config update ...
	I0816 16:51:11.870016   17475 start.go:255] writing updated cluster config ...
	I0816 16:51:11.870250   17475 ssh_runner.go:195] Run: rm -f paused
	I0816 16:51:11.921390   17475 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 16:51:11.923333   17475 out.go:177] * Done! kubectl is now configured to use "addons-671083" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.009911257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827281009884809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca816d2e-6e6f-4b3f-87a3-270402b8a109 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.010473644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af492f9f-01f0-415e-b679-f8505724895c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.010528954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af492f9f-01f0-415e-b679-f8505724895c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.010798472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af492f9f-01f0-415e-b679-f8505724895c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.047148376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8cd786e-a38a-46c4-aa0d-a3ceced26cb4 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.047237274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8cd786e-a38a-46c4-aa0d-a3ceced26cb4 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.048424561Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7038fcc5-550e-4a7a-83a6-4b8b3dd6a8b3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.049664153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827281049635923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7038fcc5-550e-4a7a-83a6-4b8b3dd6a8b3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.050470735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b57c504f-aed9-4177-81a5-1b3f2b4bab4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.050522288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b57c504f-aed9-4177-81a5-1b3f2b4bab4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.050792560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b57c504f-aed9-4177-81a5-1b3f2b4bab4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.087290211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c477c06f-82aa-41a6-98de-f2458c2d7c10 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.087367202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c477c06f-82aa-41a6-98de-f2458c2d7c10 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.088492826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=648e0d34-8c3d-4130-a62f-cc302b73b1b2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.089766472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827281089739291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=648e0d34-8c3d-4130-a62f-cc302b73b1b2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.090864523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0d86bb2-6d4e-443c-a892-0932b54d3766 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.091063707Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0d86bb2-6d4e-443c-a892-0932b54d3766 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.091594853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0d86bb2-6d4e-443c-a892-0932b54d3766 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.124430672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55b321bf-3018-4486-a8b5-5487b9693c42 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.124502621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55b321bf-3018-4486-a8b5-5487b9693c42 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.125625862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=206778eb-a0ce-4bf4-8c98-aca5df6b1e53 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.126997055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827281126935828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=206778eb-a0ce-4bf4-8c98-aca5df6b1e53 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.127542596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=624170c0-43d1-47af-b611-adbf5802ae56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.127602489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=624170c0-43d1-47af-b611-adbf5802ae56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:54:41 addons-671083 crio[680]: time="2024-08-16 16:54:41.127870089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=624170c0-43d1-47af-b611-adbf5802ae56 name=/runtime.v1.RuntimeService/ListContainers
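	The repeated Version / ImageFsInfo / ListContainers requests above are the log collector polling CRI-O's CRI endpoint while node state is gathered; the returned container list is identical across the cycles. For reference only (not part of the test suite), a minimal Go sketch of issuing the same /runtime.v1.RuntimeService/ListContainers call against the socket the node advertises (unix:///var/run/crio/crio.sock) and printing a summary similar to the "container status" section below; the import paths are the upstream grpc-go and cri-api packages, and the 13-character ID truncation is an assumption chosen to match that table:

	package main

	import (
		"context"
		"fmt"
		"os"
		"text/tabwriter"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket; an empty ContainerFilter returns the full list,
		// matching the "No filters were applied" debug messages in the journal above.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			fmt.Fprintln(os.Stderr, "dial:", err)
			os.Exit(1)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			fmt.Fprintln(os.Stderr, "ListContainers:", err)
			os.Exit(1)
		}

		// Print truncated ID, name, state, owning pod, and creation time,
		// roughly mirroring the "container status" section of this report.
		w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
		fmt.Fprintln(w, "CONTAINER\tNAME\tSTATE\tPOD\tCREATED")
		for _, c := range resp.Containers {
			id := c.Id
			if len(id) > 13 {
				id = id[:13]
			}
			created := time.Unix(0, c.CreatedAt).Format(time.RFC3339)
			fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n",
				id, c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"], created)
		}
		w.Flush()
	}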
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f2c5cac8a8c8b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   7 seconds ago       Running             hello-world-app           0                   ce3d999bfbec4       hello-world-app-55bf9c44b4-srmzf
	2728ee91db6d9       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         2 minutes ago       Running             nginx                     0                   6686f5dd4c095       nginx
	299d550fb3234       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   b43c1481fdf82       busybox
	0769de1a3f711       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        4 minutes ago       Running             local-path-provisioner    0                   8348cb178c21f       local-path-provisioner-86d989889c-jf7ql
	1008dba1c022a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   4 minutes ago       Running             metrics-server            0                   4bf146cb19f67       metrics-server-8988944d9-qjczl
	a055487b6473e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        4 minutes ago       Running             storage-provisioner       0                   42622eae74032       storage-provisioner
	421d348188441       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        4 minutes ago       Running             coredns                   0                   3ac4881b3f489       coredns-6f6b679f8f-jq9bq
	0ee6d2726d733       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        4 minutes ago       Running             kube-proxy                0                   574d3d07260a8       kube-proxy-vcpxh
	665e133f73ce9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        5 minutes ago       Running             etcd                      0                   b8adbe06dba65       etcd-addons-671083
	738c5d5fbb538       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        5 minutes ago       Running             kube-scheduler            0                   8c08976e9f90f       kube-scheduler-addons-671083
	543ca544863e9       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        5 minutes ago       Running             kube-controller-manager   0                   cdc95547c0356       kube-controller-manager-addons-671083
	074ca5d31f9b9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        5 minutes ago       Running             kube-apiserver            0                   29325b5aece31       kube-apiserver-addons-671083
	
	
	==> coredns [421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979] <==
	[INFO] 10.244.0.7:42200 - 9049 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001263993s
	[INFO] 10.244.0.7:48689 - 58184 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009508s
	[INFO] 10.244.0.7:48689 - 46923 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007067s
	[INFO] 10.244.0.7:42660 - 61551 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130505s
	[INFO] 10.244.0.7:42660 - 5228 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052733s
	[INFO] 10.244.0.7:41080 - 40141 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104365s
	[INFO] 10.244.0.7:41080 - 16076 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083295s
	[INFO] 10.244.0.7:34117 - 46189 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089031s
	[INFO] 10.244.0.7:34117 - 57962 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000041026s
	[INFO] 10.244.0.7:34023 - 15348 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048946s
	[INFO] 10.244.0.7:34023 - 502 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000024639s
	[INFO] 10.244.0.7:37032 - 15531 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049525s
	[INFO] 10.244.0.7:37032 - 37797 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000021836s
	[INFO] 10.244.0.7:40989 - 27476 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000842s
	[INFO] 10.244.0.7:40989 - 64341 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042739s
	[INFO] 10.244.0.22:51805 - 27391 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000342041s
	[INFO] 10.244.0.22:52546 - 37114 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152636s
	[INFO] 10.244.0.22:45158 - 51536 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00007447s
	[INFO] 10.244.0.22:33090 - 44549 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011485s
	[INFO] 10.244.0.22:55780 - 37157 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098638s
	[INFO] 10.244.0.22:41087 - 18887 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097595s
	[INFO] 10.244.0.22:37851 - 13996 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000676163s
	[INFO] 10.244.0.22:51319 - 64786 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00040473s
	[INFO] 10.244.0.26:35684 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000337877s
	[INFO] 10.244.0.26:60301 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084096s
	
	
	==> describe nodes <==
	Name:               addons-671083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-671083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=addons-671083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T16_49_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-671083
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 16:49:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-671083
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 16:54:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 16:52:41 +0000   Fri, 16 Aug 2024 16:49:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 16:52:41 +0000   Fri, 16 Aug 2024 16:49:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 16:52:41 +0000   Fri, 16 Aug 2024 16:49:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 16:52:41 +0000   Fri, 16 Aug 2024 16:49:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    addons-671083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae86c2bab65540d7978e0e5805419ef0
	  System UUID:                ae86c2ba-b655-40d7-978e-0e5805419ef0
	  Boot ID:                    b0c049e6-f0f9-4d60-a9b9-0af8d52b57a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  default                     hello-world-app-55bf9c44b4-srmzf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 coredns-6f6b679f8f-jq9bq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m59s
	  kube-system                 etcd-addons-671083                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m4s
	  kube-system                 kube-apiserver-addons-671083               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-671083      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-vcpxh                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-addons-671083               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 metrics-server-8988944d9-qjczl             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m54s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  local-path-storage          local-path-provisioner-86d989889c-jf7ql    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m57s  kube-proxy       
	  Normal  Starting                 5m4s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m4s   kubelet          Node addons-671083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s   kubelet          Node addons-671083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s   kubelet          Node addons-671083 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m3s   kubelet          Node addons-671083 status is now: NodeReady
	  Normal  RegisteredNode           5m     node-controller  Node addons-671083 event: Registered Node addons-671083 in Controller
	
	
	==> dmesg <==
	[  +5.024557] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.173314] kauditd_printk_skb: 49 callbacks suppressed
	[Aug16 16:50] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.551593] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.452215] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.182027] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.289785] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.115476] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.237894] kauditd_printk_skb: 83 callbacks suppressed
	[Aug16 16:51] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.005634] kauditd_printk_skb: 30 callbacks suppressed
	[ +11.536129] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.859176] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.891815] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.057029] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.180224] kauditd_printk_skb: 72 callbacks suppressed
	[  +6.489063] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.538649] kauditd_printk_skb: 33 callbacks suppressed
	[Aug16 16:52] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.019379] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.754739] kauditd_printk_skb: 21 callbacks suppressed
	[ +35.129381] kauditd_printk_skb: 7 callbacks suppressed
	[Aug16 16:53] kauditd_printk_skb: 33 callbacks suppressed
	[Aug16 16:54] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.146471] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332] <==
	{"level":"info","ts":"2024-08-16T16:50:58.746061Z","caller":"traceutil/trace.go:171","msg":"trace[1297409268] linearizableReadLoop","detail":"{readStateIndex:1143; appliedIndex:1141; }","duration":"429.79018ms","start":"2024-08-16T16:50:58.316237Z","end":"2024-08-16T16:50:58.746027Z","steps":["trace[1297409268] 'read index received'  (duration: 422.293479ms)","trace[1297409268] 'applied index is now lower than readState.Index'  (duration: 7.496233ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T16:50:58.746425Z","caller":"traceutil/trace.go:171","msg":"trace[2025505688] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"453.049305ms","start":"2024-08-16T16:50:58.293366Z","end":"2024-08-16T16:50:58.746416Z","steps":["trace[2025505688] 'process raft request'  (duration: 452.554641ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.746541Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.293348Z","time spent":"453.127204ms","remote":"127.0.0.1:44972","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":10366,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-vz8gb\" mod_revision:1086 > success:<request_put:<key:\"/registry/pods/gadget/gadget-vz8gb\" value_size:10324 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-vz8gb\" > >"}
	{"level":"warn","ts":"2024-08-16T16:50:58.746733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.485724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.746778Z","caller":"traceutil/trace.go:171","msg":"trace[536864379] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1111; }","duration":"430.53937ms","start":"2024-08-16T16:50:58.316230Z","end":"2024-08-16T16:50:58.746770Z","steps":["trace[536864379] 'agreement among raft nodes before linearized reading'  (duration: 430.449115ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.746798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.316164Z","time spent":"430.629934ms","remote":"127.0.0.1:44792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-16T16:50:58.746959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.336105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.747044Z","caller":"traceutil/trace.go:171","msg":"trace[147962105] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"398.423075ms","start":"2024-08-16T16:50:58.348616Z","end":"2024-08-16T16:50:58.747039Z","steps":["trace[147962105] 'agreement among raft nodes before linearized reading'  (duration: 398.323395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.747068Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.348583Z","time spent":"398.479686ms","remote":"127.0.0.1:44972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-16T16:50:58.747537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.612814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.747567Z","caller":"traceutil/trace.go:171","msg":"trace[220685933] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"233.657465ms","start":"2024-08-16T16:50:58.513904Z","end":"2024-08-16T16:50:58.747561Z","steps":["trace[220685933] 'agreement among raft nodes before linearized reading'  (duration: 233.60375ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.747694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.628658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.747708Z","caller":"traceutil/trace.go:171","msg":"trace[1734614247] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"396.644791ms","start":"2024-08-16T16:50:58.351059Z","end":"2024-08-16T16:50:58.747704Z","steps":["trace[1734614247] 'agreement among raft nodes before linearized reading'  (duration: 396.596762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.747721Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.351033Z","time spent":"396.685106ms","remote":"127.0.0.1:44972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-16T16:51:01.643169Z","caller":"traceutil/trace.go:171","msg":"trace[838376814] linearizableReadLoop","detail":"{readStateIndex:1151; appliedIndex:1150; }","duration":"128.922741ms","start":"2024-08-16T16:51:01.514233Z","end":"2024-08-16T16:51:01.643156Z","steps":["trace[838376814] 'read index received'  (duration: 128.613949ms)","trace[838376814] 'applied index is now lower than readState.Index'  (duration: 308.345µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T16:51:01.643471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.173917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:51:01.643565Z","caller":"traceutil/trace.go:171","msg":"trace[1014669857] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"129.326085ms","start":"2024-08-16T16:51:01.514230Z","end":"2024-08-16T16:51:01.643556Z","steps":["trace[1014669857] 'agreement among raft nodes before linearized reading'  (duration: 129.124399ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T16:51:01.643893Z","caller":"traceutil/trace.go:171","msg":"trace[824408050] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"182.606835ms","start":"2024-08-16T16:51:01.461274Z","end":"2024-08-16T16:51:01.643881Z","steps":["trace[824408050] 'process raft request'  (duration: 181.625416ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T16:51:07.749604Z","caller":"traceutil/trace.go:171","msg":"trace[1652212945] linearizableReadLoop","detail":"{readStateIndex:1180; appliedIndex:1179; }","duration":"236.494744ms","start":"2024-08-16T16:51:07.513095Z","end":"2024-08-16T16:51:07.749590Z","steps":["trace[1652212945] 'read index received'  (duration: 236.364981ms)","trace[1652212945] 'applied index is now lower than readState.Index'  (duration: 129.363µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T16:51:07.749838Z","caller":"traceutil/trace.go:171","msg":"trace[1323612367] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"299.413625ms","start":"2024-08-16T16:51:07.450415Z","end":"2024-08-16T16:51:07.749829Z","steps":["trace[1323612367] 'process raft request'  (duration: 299.08505ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:51:07.750015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.046909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-16T16:51:07.750050Z","caller":"traceutil/trace.go:171","msg":"trace[1565559132] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1146; }","duration":"222.138413ms","start":"2024-08-16T16:51:07.527904Z","end":"2024-08-16T16:51:07.750043Z","steps":["trace[1565559132] 'agreement among raft nodes before linearized reading'  (duration: 221.995813ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:51:07.750113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.009443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:51:07.750137Z","caller":"traceutil/trace.go:171","msg":"trace[690359742] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"237.038317ms","start":"2024-08-16T16:51:07.513091Z","end":"2024-08-16T16:51:07.750130Z","steps":["trace[690359742] 'agreement among raft nodes before linearized reading'  (duration: 236.995962ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T16:52:14.752435Z","caller":"traceutil/trace.go:171","msg":"trace[1066084898] transaction","detail":"{read_only:false; response_revision:1647; number_of_response:1; }","duration":"299.049343ms","start":"2024-08-16T16:52:14.453361Z","end":"2024-08-16T16:52:14.752410Z","steps":["trace[1066084898] 'process raft request'  (duration: 298.941178ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:54:41 up 5 min,  0 users,  load average: 0.84, 1.20, 0.65
	Linux addons-671083 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 16:51:27.503378       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0816 16:51:27.517044       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0816 16:51:54.010764       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.237.105"}
	I0816 16:52:05.752034       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 16:52:06.888782       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 16:52:11.242353       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 16:52:11.427044       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.124.212"}
	I0816 16:52:27.215757       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 16:53:04.571544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.572298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.594480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.594548       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.607711       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.608502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.623682       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.623741       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.771491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.771592       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 16:53:05.624129       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 16:53:05.772530       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0816 16:53:05.773673       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0816 16:54:31.501118       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.174.61"}
	
	
	==> kube-controller-manager [543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991] <==
	E0816 16:53:25.341454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:53:27.584880       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:53:27.584946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:53:37.763266       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:53:37.763338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:53:40.201939       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:53:40.202015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:53:50.732097       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:53:50.732242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:54:05.635701       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:54:05.635782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:54:15.008772       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:54:15.008869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:54:16.896523       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:54:16.896671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 16:54:31.305070       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.750196ms"
	I0816 16:54:31.317100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.797758ms"
	I0816 16:54:31.317213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="67.67µs"
	W0816 16:54:32.750463       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:54:32.750516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 16:54:33.238722       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0816 16:54:33.243859       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="10.4µs"
	I0816 16:54:33.251529       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0816 16:54:34.666418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.600262ms"
	I0816 16:54:34.666552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.178µs"
	
	
	==> kube-proxy [0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 16:49:44.050949       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 16:49:44.065378       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	E0816 16:49:44.065448       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 16:49:44.130808       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 16:49:44.130853       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 16:49:44.130880       1 server_linux.go:169] "Using iptables Proxier"
	I0816 16:49:44.133240       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 16:49:44.133458       1 server.go:483] "Version info" version="v1.31.0"
	I0816 16:49:44.133484       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 16:49:44.136181       1 config.go:197] "Starting service config controller"
	I0816 16:49:44.136204       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 16:49:44.136241       1 config.go:104] "Starting endpoint slice config controller"
	I0816 16:49:44.136257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 16:49:44.137042       1 config.go:326] "Starting node config controller"
	I0816 16:49:44.137063       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 16:49:44.236528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 16:49:44.236581       1 shared_informer.go:320] Caches are synced for service config
	I0816 16:49:44.237154       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946] <==
	W0816 16:49:34.669335       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 16:49:34.669533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:34.669924       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 16:49:34.670047       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 16:49:35.501090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 16:49:35.501138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.508114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 16:49:35.508158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.543566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 16:49:35.543614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.595687       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 16:49:35.595739       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 16:49:35.630321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 16:49:35.630441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.742240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 16:49:35.742334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.746758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 16:49:35.746931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.788063       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 16:49:35.788172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.800387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 16:49:35.800507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.804441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 16:49:35.804487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0816 16:49:38.538752       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 16:54:32 addons-671083 kubelet[1211]: E0816 16:54:32.657057    1211 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0337d112b62cfcd29b9f602169a655fd3699db037e496fef6cc32558ceecab3\": container with ID starting with b0337d112b62cfcd29b9f602169a655fd3699db037e496fef6cc32558ceecab3 not found: ID does not exist" containerID="b0337d112b62cfcd29b9f602169a655fd3699db037e496fef6cc32558ceecab3"
	Aug 16 16:54:32 addons-671083 kubelet[1211]: I0816 16:54:32.657103    1211 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0337d112b62cfcd29b9f602169a655fd3699db037e496fef6cc32558ceecab3"} err="failed to get container status \"b0337d112b62cfcd29b9f602169a655fd3699db037e496fef6cc32558ceecab3\": rpc error: code = NotFound desc = could not find container \"b0337d112b62cfcd29b9f602169a655fd3699db037e496fef6cc32558ceecab3\": container with ID starting with b0337d112b62cfcd29b9f602169a655fd3699db037e496fef6cc32558ceecab3 not found: ID does not exist"
	Aug 16 16:54:33 addons-671083 kubelet[1211]: I0816 16:54:33.237172    1211 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a737f23d-c62b-4073-9b90-6c95e9a3374b" path="/var/lib/kubelet/pods/a737f23d-c62b-4073-9b90-6c95e9a3374b/volumes"
	Aug 16 16:54:35 addons-671083 kubelet[1211]: I0816 16:54:35.235700    1211 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a324552d-1ae2-456d-a859-be3b10d61e68" path="/var/lib/kubelet/pods/a324552d-1ae2-456d-a859-be3b10d61e68/volumes"
	Aug 16 16:54:35 addons-671083 kubelet[1211]: I0816 16:54:35.236159    1211 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca903a95-f766-470a-861f-a261e46d39dc" path="/var/lib/kubelet/pods/ca903a95-f766-470a-861f-a261e46d39dc/volumes"
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.494569    1211 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv4l5\" (UniqueName: \"kubernetes.io/projected/643a7c96-1923-4663-86f3-57ec7d049e9e-kube-api-access-hv4l5\") pod \"643a7c96-1923-4663-86f3-57ec7d049e9e\" (UID: \"643a7c96-1923-4663-86f3-57ec7d049e9e\") "
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.494612    1211 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/643a7c96-1923-4663-86f3-57ec7d049e9e-webhook-cert\") pod \"643a7c96-1923-4663-86f3-57ec7d049e9e\" (UID: \"643a7c96-1923-4663-86f3-57ec7d049e9e\") "
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.505639    1211 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643a7c96-1923-4663-86f3-57ec7d049e9e-kube-api-access-hv4l5" (OuterVolumeSpecName: "kube-api-access-hv4l5") pod "643a7c96-1923-4663-86f3-57ec7d049e9e" (UID: "643a7c96-1923-4663-86f3-57ec7d049e9e"). InnerVolumeSpecName "kube-api-access-hv4l5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.505745    1211 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643a7c96-1923-4663-86f3-57ec7d049e9e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "643a7c96-1923-4663-86f3-57ec7d049e9e" (UID: "643a7c96-1923-4663-86f3-57ec7d049e9e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.595412    1211 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hv4l5\" (UniqueName: \"kubernetes.io/projected/643a7c96-1923-4663-86f3-57ec7d049e9e-kube-api-access-hv4l5\") on node \"addons-671083\" DevicePath \"\""
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.595445    1211 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/643a7c96-1923-4663-86f3-57ec7d049e9e-webhook-cert\") on node \"addons-671083\" DevicePath \"\""
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.654147    1211 scope.go:117] "RemoveContainer" containerID="84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f"
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.686989    1211 scope.go:117] "RemoveContainer" containerID="84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f"
	Aug 16 16:54:36 addons-671083 kubelet[1211]: E0816 16:54:36.687602    1211 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f\": container with ID starting with 84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f not found: ID does not exist" containerID="84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f"
	Aug 16 16:54:36 addons-671083 kubelet[1211]: I0816 16:54:36.687666    1211 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f"} err="failed to get container status \"84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f\": rpc error: code = NotFound desc = could not find container \"84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f\": container with ID starting with 84b494771f3dc92c05c31c0d9a8dc8febe71ce56fe4f18985ccf24f1ced5319f not found: ID does not exist"
	Aug 16 16:54:37 addons-671083 kubelet[1211]: I0816 16:54:37.238184    1211 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643a7c96-1923-4663-86f3-57ec7d049e9e" path="/var/lib/kubelet/pods/643a7c96-1923-4663-86f3-57ec7d049e9e/volumes"
	Aug 16 16:54:37 addons-671083 kubelet[1211]: E0816 16:54:37.252915    1211 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 16:54:37 addons-671083 kubelet[1211]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 16:54:37 addons-671083 kubelet[1211]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 16:54:37 addons-671083 kubelet[1211]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 16:54:37 addons-671083 kubelet[1211]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 16:54:37 addons-671083 kubelet[1211]: E0816 16:54:37.550713    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827277550228690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:54:37 addons-671083 kubelet[1211]: E0816 16:54:37.550737    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827277550228690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:54:37 addons-671083 kubelet[1211]: I0816 16:54:37.709830    1211 scope.go:117] "RemoveContainer" containerID="3ac1657d38439e5f773aaed759461807139cd1971bb46d202deecc9a62c9309e"
	Aug 16 16:54:37 addons-671083 kubelet[1211]: I0816 16:54:37.724811    1211 scope.go:117] "RemoveContainer" containerID="62bc8fe6fdda25a73d9a5e06cc6470e188fc802073b3ae255bb8becff66e932b"
	
	
	==> storage-provisioner [a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5] <==
	I0816 16:49:49.330518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 16:49:49.464713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 16:49:49.464775       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 16:49:49.637053       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 16:49:49.741365       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-671083_d4cf6f3f-58d0-454d-b31d-a9613105700e!
	I0816 16:49:49.741444       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d42da215-d37b-49ed-8472-72b6409bcac2", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-671083_d4cf6f3f-58d0-454d-b31d-a9613105700e became leader
	I0816 16:49:50.042350       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-671083_d4cf6f3f-58d0-454d-b31d-a9613105700e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-671083 -n addons-671083
helpers_test.go:261: (dbg) Run:  kubectl --context addons-671083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.16s)
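
One way to narrow down a curl timeout like the one above after the fact (a minimal follow-up sketch, not captured in this run; the label selector assumes the standard ingress-nginx addon labels):

	kubectl --context addons-671083 get ingress -A -o wide
	kubectl --context addons-671083 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-671083 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100

If the controller pods are Running but the in-node curl still times out, the service endpoints and the controller logs are usually the quickest way to tell whether the request ever reached nginx.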

                                                
                                    
TestAddons/parallel/MetricsServer (319.49s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.804549ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-qjczl" [499be229-e123-4025-afef-b0608d31b95d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004100115s
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (81.497958ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 2m6.124022727s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (64.996326ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 2m9.996857052s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (67.072801ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 2m15.996165395s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (71.534076ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 2m25.355444418s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (66.730636ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 2m35.143341782s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (68.477719ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 2m55.635512135s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (70.687891ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 3m15.654961517s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (62.192893ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 3m56.461280619s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (72.204507ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 4m54.359409401s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (63.343271ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 6m0.630968971s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-671083 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-671083 top pods -n kube-system: exit status 1 (61.995608ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jq9bq, age: 7m18.047888764s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable metrics-server --alsologtostderr -v=1
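The repeated "Metrics not available" errors above, alongside the apiserver's earlier 503 while updating the v1beta1.metrics.k8s.io APIService (see the Ingress post-mortem logs), suggest the metrics API may never have become Available. A minimal follow-up check, not part of the recorded run (the deployment name is assumed from the metrics-server pod name above):

	kubectl --context addons-671083 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-671083 -n kube-system logs deployment/metrics-server --tail=100
	kubectl --context addons-671083 top nodes

An Available=False condition on the APIService, or connection/TLS errors in the metrics-server log, would explain why kubectl top kept failing even though the pod itself was Running.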
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-671083 -n addons-671083
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-671083 logs -n 25: (1.152102006s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-696494                                                                     | download-only-696494 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-250559 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | binary-mirror-250559                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41735                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-250559                                                                     | binary-mirror-250559 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-671083 --wait=true                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-671083 ssh cat                                                                       | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | /opt/local-path-provisioner/pvc-38437f91-cec1-425d-a656-8ecfa2176521_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-671083 ip                                                                            | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | -p addons-671083                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:51 UTC |
	|         | -p addons-671083                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:51 UTC | 16 Aug 24 16:52 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC | 16 Aug 24 16:52 UTC |
	|         | addons-671083                                                                               |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC | 16 Aug 24 16:52 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-671083 ssh curl -s                                                                   | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-671083 addons                                                                        | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:52 UTC | 16 Aug 24 16:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-671083 addons                                                                        | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:53 UTC | 16 Aug 24 16:53 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-671083 ip                                                                            | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:54 UTC | 16 Aug 24 16:54 UTC |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:54 UTC | 16 Aug 24 16:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-671083 addons disable                                                                | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:54 UTC | 16 Aug 24 16:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-671083 addons                                                                        | addons-671083        | jenkins | v1.33.1 | 16 Aug 24 16:57 UTC | 16 Aug 24 16:57 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 16:48:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 16:48:58.066320   17475 out.go:345] Setting OutFile to fd 1 ...
	I0816 16:48:58.066549   17475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:58.066557   17475 out.go:358] Setting ErrFile to fd 2...
	I0816 16:48:58.066561   17475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:58.066729   17475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 16:48:58.067959   17475 out.go:352] Setting JSON to false
	I0816 16:48:58.068791   17475 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1836,"bootTime":1723825102,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 16:48:58.068852   17475 start.go:139] virtualization: kvm guest
	I0816 16:48:58.070481   17475 out.go:177] * [addons-671083] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 16:48:58.071896   17475 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 16:48:58.071898   17475 notify.go:220] Checking for updates...
	I0816 16:48:58.073106   17475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 16:48:58.074323   17475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 16:48:58.075526   17475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:58.076862   17475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 16:48:58.077959   17475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 16:48:58.079112   17475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 16:48:58.109792   17475 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 16:48:58.110961   17475 start.go:297] selected driver: kvm2
	I0816 16:48:58.111007   17475 start.go:901] validating driver "kvm2" against <nil>
	I0816 16:48:58.111026   17475 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 16:48:58.111718   17475 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:58.111787   17475 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 16:48:58.126153   17475 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 16:48:58.126195   17475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 16:48:58.126471   17475 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 16:48:58.126549   17475 cni.go:84] Creating CNI manager for ""
	I0816 16:48:58.126566   17475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:48:58.126577   17475 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 16:48:58.126635   17475 start.go:340] cluster config:
	{Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 16:48:58.126747   17475 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:58.128601   17475 out.go:177] * Starting "addons-671083" primary control-plane node in "addons-671083" cluster
	I0816 16:48:58.129930   17475 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 16:48:58.129964   17475 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 16:48:58.129980   17475 cache.go:56] Caching tarball of preloaded images
	I0816 16:48:58.130060   17475 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 16:48:58.130073   17475 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 16:48:58.130545   17475 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/config.json ...
	I0816 16:48:58.130586   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/config.json: {Name:mkd709046bf2fd424ed782edfe71f24ef626b9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:48:58.130790   17475 start.go:360] acquireMachinesLock for addons-671083: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 16:48:58.130856   17475 start.go:364] duration metric: took 47.276µs to acquireMachinesLock for "addons-671083"
	I0816 16:48:58.130880   17475 start.go:93] Provisioning new machine with config: &{Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 16:48:58.130953   17475 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 16:48:58.132462   17475 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0816 16:48:58.132605   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:48:58.132665   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:48:58.146329   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0816 16:48:58.146784   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:48:58.147300   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:48:58.147317   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:48:58.147728   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:48:58.147964   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:48:58.148123   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:48:58.148272   17475 start.go:159] libmachine.API.Create for "addons-671083" (driver="kvm2")
	I0816 16:48:58.148308   17475 client.go:168] LocalClient.Create starting
	I0816 16:48:58.148348   17475 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 16:48:58.212191   17475 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 16:48:58.364167   17475 main.go:141] libmachine: Running pre-create checks...
	I0816 16:48:58.364191   17475 main.go:141] libmachine: (addons-671083) Calling .PreCreateCheck
	I0816 16:48:58.364721   17475 main.go:141] libmachine: (addons-671083) Calling .GetConfigRaw
	I0816 16:48:58.365138   17475 main.go:141] libmachine: Creating machine...
	I0816 16:48:58.365153   17475 main.go:141] libmachine: (addons-671083) Calling .Create
	I0816 16:48:58.365323   17475 main.go:141] libmachine: (addons-671083) Creating KVM machine...
	I0816 16:48:58.366609   17475 main.go:141] libmachine: (addons-671083) DBG | found existing default KVM network
	I0816 16:48:58.367258   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.367118   17497 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0816 16:48:58.367271   17475 main.go:141] libmachine: (addons-671083) DBG | created network xml: 
	I0816 16:48:58.367280   17475 main.go:141] libmachine: (addons-671083) DBG | <network>
	I0816 16:48:58.367286   17475 main.go:141] libmachine: (addons-671083) DBG |   <name>mk-addons-671083</name>
	I0816 16:48:58.367292   17475 main.go:141] libmachine: (addons-671083) DBG |   <dns enable='no'/>
	I0816 16:48:58.367296   17475 main.go:141] libmachine: (addons-671083) DBG |   
	I0816 16:48:58.367308   17475 main.go:141] libmachine: (addons-671083) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 16:48:58.367316   17475 main.go:141] libmachine: (addons-671083) DBG |     <dhcp>
	I0816 16:48:58.367326   17475 main.go:141] libmachine: (addons-671083) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 16:48:58.367335   17475 main.go:141] libmachine: (addons-671083) DBG |     </dhcp>
	I0816 16:48:58.367344   17475 main.go:141] libmachine: (addons-671083) DBG |   </ip>
	I0816 16:48:58.367353   17475 main.go:141] libmachine: (addons-671083) DBG |   
	I0816 16:48:58.367360   17475 main.go:141] libmachine: (addons-671083) DBG | </network>
	I0816 16:48:58.367374   17475 main.go:141] libmachine: (addons-671083) DBG | 
	I0816 16:48:58.372585   17475 main.go:141] libmachine: (addons-671083) DBG | trying to create private KVM network mk-addons-671083 192.168.39.0/24...
	I0816 16:48:58.437354   17475 main.go:141] libmachine: (addons-671083) DBG | private KVM network mk-addons-671083 192.168.39.0/24 created
	I0816 16:48:58.437389   17475 main.go:141] libmachine: (addons-671083) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083 ...
	I0816 16:48:58.437404   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.437305   17497 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:58.437437   17475 main.go:141] libmachine: (addons-671083) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 16:48:58.437459   17475 main.go:141] libmachine: (addons-671083) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 16:48:58.687608   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.687450   17497 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa...
	I0816 16:48:58.861012   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.860911   17497 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/addons-671083.rawdisk...
	I0816 16:48:58.861033   17475 main.go:141] libmachine: (addons-671083) DBG | Writing magic tar header
	I0816 16:48:58.861043   17475 main.go:141] libmachine: (addons-671083) DBG | Writing SSH key tar header
	I0816 16:48:58.861101   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:48:58.861041   17497 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083 ...
	I0816 16:48:58.861256   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083 (perms=drwx------)
	I0816 16:48:58.861301   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 16:48:58.861320   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083
	I0816 16:48:58.861332   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 16:48:58.861343   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:58.861351   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 16:48:58.861358   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 16:48:58.861380   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 16:48:58.861412   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 16:48:58.861426   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home/jenkins
	I0816 16:48:58.861438   17475 main.go:141] libmachine: (addons-671083) DBG | Checking permissions on dir: /home
	I0816 16:48:58.861448   17475 main.go:141] libmachine: (addons-671083) DBG | Skipping /home - not owner
	I0816 16:48:58.861465   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 16:48:58.861477   17475 main.go:141] libmachine: (addons-671083) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 16:48:58.861502   17475 main.go:141] libmachine: (addons-671083) Creating domain...
	I0816 16:48:58.862374   17475 main.go:141] libmachine: (addons-671083) define libvirt domain using xml: 
	I0816 16:48:58.862398   17475 main.go:141] libmachine: (addons-671083) <domain type='kvm'>
	I0816 16:48:58.862409   17475 main.go:141] libmachine: (addons-671083)   <name>addons-671083</name>
	I0816 16:48:58.862416   17475 main.go:141] libmachine: (addons-671083)   <memory unit='MiB'>4000</memory>
	I0816 16:48:58.862436   17475 main.go:141] libmachine: (addons-671083)   <vcpu>2</vcpu>
	I0816 16:48:58.862454   17475 main.go:141] libmachine: (addons-671083)   <features>
	I0816 16:48:58.862462   17475 main.go:141] libmachine: (addons-671083)     <acpi/>
	I0816 16:48:58.862466   17475 main.go:141] libmachine: (addons-671083)     <apic/>
	I0816 16:48:58.862472   17475 main.go:141] libmachine: (addons-671083)     <pae/>
	I0816 16:48:58.862479   17475 main.go:141] libmachine: (addons-671083)     
	I0816 16:48:58.862484   17475 main.go:141] libmachine: (addons-671083)   </features>
	I0816 16:48:58.862491   17475 main.go:141] libmachine: (addons-671083)   <cpu mode='host-passthrough'>
	I0816 16:48:58.862495   17475 main.go:141] libmachine: (addons-671083)   
	I0816 16:48:58.862503   17475 main.go:141] libmachine: (addons-671083)   </cpu>
	I0816 16:48:58.862509   17475 main.go:141] libmachine: (addons-671083)   <os>
	I0816 16:48:58.862515   17475 main.go:141] libmachine: (addons-671083)     <type>hvm</type>
	I0816 16:48:58.862520   17475 main.go:141] libmachine: (addons-671083)     <boot dev='cdrom'/>
	I0816 16:48:58.862525   17475 main.go:141] libmachine: (addons-671083)     <boot dev='hd'/>
	I0816 16:48:58.862553   17475 main.go:141] libmachine: (addons-671083)     <bootmenu enable='no'/>
	I0816 16:48:58.862573   17475 main.go:141] libmachine: (addons-671083)   </os>
	I0816 16:48:58.862585   17475 main.go:141] libmachine: (addons-671083)   <devices>
	I0816 16:48:58.862597   17475 main.go:141] libmachine: (addons-671083)     <disk type='file' device='cdrom'>
	I0816 16:48:58.862612   17475 main.go:141] libmachine: (addons-671083)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/boot2docker.iso'/>
	I0816 16:48:58.862624   17475 main.go:141] libmachine: (addons-671083)       <target dev='hdc' bus='scsi'/>
	I0816 16:48:58.862636   17475 main.go:141] libmachine: (addons-671083)       <readonly/>
	I0816 16:48:58.862650   17475 main.go:141] libmachine: (addons-671083)     </disk>
	I0816 16:48:58.862663   17475 main.go:141] libmachine: (addons-671083)     <disk type='file' device='disk'>
	I0816 16:48:58.862676   17475 main.go:141] libmachine: (addons-671083)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 16:48:58.862692   17475 main.go:141] libmachine: (addons-671083)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/addons-671083.rawdisk'/>
	I0816 16:48:58.862712   17475 main.go:141] libmachine: (addons-671083)       <target dev='hda' bus='virtio'/>
	I0816 16:48:58.862732   17475 main.go:141] libmachine: (addons-671083)     </disk>
	I0816 16:48:58.862750   17475 main.go:141] libmachine: (addons-671083)     <interface type='network'>
	I0816 16:48:58.862759   17475 main.go:141] libmachine: (addons-671083)       <source network='mk-addons-671083'/>
	I0816 16:48:58.862764   17475 main.go:141] libmachine: (addons-671083)       <model type='virtio'/>
	I0816 16:48:58.862770   17475 main.go:141] libmachine: (addons-671083)     </interface>
	I0816 16:48:58.862777   17475 main.go:141] libmachine: (addons-671083)     <interface type='network'>
	I0816 16:48:58.862783   17475 main.go:141] libmachine: (addons-671083)       <source network='default'/>
	I0816 16:48:58.862790   17475 main.go:141] libmachine: (addons-671083)       <model type='virtio'/>
	I0816 16:48:58.862795   17475 main.go:141] libmachine: (addons-671083)     </interface>
	I0816 16:48:58.862802   17475 main.go:141] libmachine: (addons-671083)     <serial type='pty'>
	I0816 16:48:58.862808   17475 main.go:141] libmachine: (addons-671083)       <target port='0'/>
	I0816 16:48:58.862812   17475 main.go:141] libmachine: (addons-671083)     </serial>
	I0816 16:48:58.862825   17475 main.go:141] libmachine: (addons-671083)     <console type='pty'>
	I0816 16:48:58.862837   17475 main.go:141] libmachine: (addons-671083)       <target type='serial' port='0'/>
	I0816 16:48:58.862845   17475 main.go:141] libmachine: (addons-671083)     </console>
	I0816 16:48:58.862849   17475 main.go:141] libmachine: (addons-671083)     <rng model='virtio'>
	I0816 16:48:58.862856   17475 main.go:141] libmachine: (addons-671083)       <backend model='random'>/dev/random</backend>
	I0816 16:48:58.862863   17475 main.go:141] libmachine: (addons-671083)     </rng>
	I0816 16:48:58.862868   17475 main.go:141] libmachine: (addons-671083)     
	I0816 16:48:58.862873   17475 main.go:141] libmachine: (addons-671083)     
	I0816 16:48:58.862879   17475 main.go:141] libmachine: (addons-671083)   </devices>
	I0816 16:48:58.862883   17475 main.go:141] libmachine: (addons-671083) </domain>
	I0816 16:48:58.862890   17475 main.go:141] libmachine: (addons-671083) 
	I0816 16:48:58.869540   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:96:88:f3 in network default
	I0816 16:48:58.870032   17475 main.go:141] libmachine: (addons-671083) Ensuring networks are active...
	I0816 16:48:58.870060   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:48:58.870572   17475 main.go:141] libmachine: (addons-671083) Ensuring network default is active
	I0816 16:48:58.870899   17475 main.go:141] libmachine: (addons-671083) Ensuring network mk-addons-671083 is active
	I0816 16:48:58.871971   17475 main.go:141] libmachine: (addons-671083) Getting domain xml...
	I0816 16:48:58.872549   17475 main.go:141] libmachine: (addons-671083) Creating domain...
	I0816 16:49:00.249291   17475 main.go:141] libmachine: (addons-671083) Waiting to get IP...
	I0816 16:49:00.250017   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:00.250334   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:00.250391   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:00.250329   17497 retry.go:31] will retry after 283.890348ms: waiting for machine to come up
	I0816 16:49:00.535939   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:00.536338   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:00.536365   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:00.536278   17497 retry.go:31] will retry after 272.589716ms: waiting for machine to come up
	I0816 16:49:00.810717   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:00.811053   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:00.811076   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:00.811017   17497 retry.go:31] will retry after 327.359128ms: waiting for machine to come up
	I0816 16:49:01.139598   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:01.140077   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:01.140105   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:01.139964   17497 retry.go:31] will retry after 531.723403ms: waiting for machine to come up
	I0816 16:49:01.673755   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:01.674244   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:01.674275   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:01.674193   17497 retry.go:31] will retry after 675.414072ms: waiting for machine to come up
	I0816 16:49:02.351169   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:02.351653   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:02.351681   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:02.351600   17497 retry.go:31] will retry after 640.251541ms: waiting for machine to come up
	I0816 16:49:02.993371   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:02.993740   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:02.993763   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:02.993706   17497 retry.go:31] will retry after 1.168312298s: waiting for machine to come up
	I0816 16:49:04.163701   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:04.164021   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:04.164044   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:04.163972   17497 retry.go:31] will retry after 1.340581367s: waiting for machine to come up
	I0816 16:49:05.505783   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:05.506209   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:05.506238   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:05.506128   17497 retry.go:31] will retry after 1.298392326s: waiting for machine to come up
	I0816 16:49:06.806595   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:06.806996   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:06.807031   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:06.806964   17497 retry.go:31] will retry after 2.080408667s: waiting for machine to come up
	I0816 16:49:08.889159   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:08.889759   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:08.889781   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:08.889712   17497 retry.go:31] will retry after 2.264587812s: waiting for machine to come up
	I0816 16:49:11.156974   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:11.157347   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:11.157376   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:11.157323   17497 retry.go:31] will retry after 2.310982395s: waiting for machine to come up
	I0816 16:49:13.470389   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:13.470775   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:13.470793   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:13.470750   17497 retry.go:31] will retry after 3.3460659s: waiting for machine to come up
	I0816 16:49:16.821167   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:16.821588   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find current IP address of domain addons-671083 in network mk-addons-671083
	I0816 16:49:16.821611   17475 main.go:141] libmachine: (addons-671083) DBG | I0816 16:49:16.821544   17497 retry.go:31] will retry after 3.950147872s: waiting for machine to come up
	I0816 16:49:20.775320   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.775789   17475 main.go:141] libmachine: (addons-671083) Found IP for machine: 192.168.39.240
	I0816 16:49:20.775803   17475 main.go:141] libmachine: (addons-671083) Reserving static IP address...
	I0816 16:49:20.775812   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has current primary IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.776271   17475 main.go:141] libmachine: (addons-671083) DBG | unable to find host DHCP lease matching {name: "addons-671083", mac: "52:54:00:4b:34:d9", ip: "192.168.39.240"} in network mk-addons-671083
	I0816 16:49:20.845508   17475 main.go:141] libmachine: (addons-671083) Reserved static IP address: 192.168.39.240
	I0816 16:49:20.845534   17475 main.go:141] libmachine: (addons-671083) Waiting for SSH to be available...
	I0816 16:49:20.845543   17475 main.go:141] libmachine: (addons-671083) DBG | Getting to WaitForSSH function...
	I0816 16:49:20.847610   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.848015   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:20.848049   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.848229   17475 main.go:141] libmachine: (addons-671083) DBG | Using SSH client type: external
	I0816 16:49:20.848263   17475 main.go:141] libmachine: (addons-671083) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa (-rw-------)
	I0816 16:49:20.848309   17475 main.go:141] libmachine: (addons-671083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 16:49:20.848323   17475 main.go:141] libmachine: (addons-671083) DBG | About to run SSH command:
	I0816 16:49:20.848335   17475 main.go:141] libmachine: (addons-671083) DBG | exit 0
	I0816 16:49:20.976967   17475 main.go:141] libmachine: (addons-671083) DBG | SSH cmd err, output: <nil>: 
	I0816 16:49:20.977279   17475 main.go:141] libmachine: (addons-671083) KVM machine creation complete!
	I0816 16:49:20.977674   17475 main.go:141] libmachine: (addons-671083) Calling .GetConfigRaw
	I0816 16:49:20.978151   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:20.978313   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:20.978474   17475 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 16:49:20.978486   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:20.979759   17475 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 16:49:20.979774   17475 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 16:49:20.979780   17475 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 16:49:20.979786   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:20.982049   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.982411   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:20.982438   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:20.982558   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:20.982724   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:20.982912   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:20.983045   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:20.983215   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:20.983379   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:20.983389   17475 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 16:49:21.079814   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 16:49:21.079834   17475 main.go:141] libmachine: Detecting the provisioner...
	I0816 16:49:21.079842   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.082532   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.082912   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.082936   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.083024   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.083232   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.083380   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.083507   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.083770   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.083958   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.083970   17475 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 16:49:21.180964   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 16:49:21.181038   17475 main.go:141] libmachine: found compatible host: buildroot
	I0816 16:49:21.181048   17475 main.go:141] libmachine: Provisioning with buildroot...
	I0816 16:49:21.181055   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:49:21.181426   17475 buildroot.go:166] provisioning hostname "addons-671083"
	I0816 16:49:21.181451   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:49:21.181629   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.184121   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.184541   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.184581   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.184760   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.184933   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.185085   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.185225   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.185430   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.185624   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.185641   17475 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-671083 && echo "addons-671083" | sudo tee /etc/hostname
	I0816 16:49:21.299478   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-671083
	
	I0816 16:49:21.299509   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.302474   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.302806   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.302833   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.302986   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.303177   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.303385   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.303544   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.303704   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.303929   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.303948   17475 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-671083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-671083/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-671083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 16:49:21.408027   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 16:49:21.408053   17475 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 16:49:21.408078   17475 buildroot.go:174] setting up certificates
	I0816 16:49:21.408093   17475 provision.go:84] configureAuth start
	I0816 16:49:21.408103   17475 main.go:141] libmachine: (addons-671083) Calling .GetMachineName
	I0816 16:49:21.408401   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:21.410788   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.411067   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.411100   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.411293   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.413459   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.413787   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.413811   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.413887   17475 provision.go:143] copyHostCerts
	I0816 16:49:21.413976   17475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 16:49:21.414114   17475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 16:49:21.414227   17475 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 16:49:21.414310   17475 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.addons-671083 san=[127.0.0.1 192.168.39.240 addons-671083 localhost minikube]
	I0816 16:49:21.726952   17475 provision.go:177] copyRemoteCerts
	I0816 16:49:21.727010   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 16:49:21.727032   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.729698   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.730018   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.730046   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.730227   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.730418   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.730638   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.730778   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:21.806159   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 16:49:21.827190   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 16:49:21.848400   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 16:49:21.868815   17475 provision.go:87] duration metric: took 460.707117ms to configureAuth
	I0816 16:49:21.868848   17475 buildroot.go:189] setting minikube options for container-runtime
	I0816 16:49:21.869048   17475 config.go:182] Loaded profile config "addons-671083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 16:49:21.869140   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:21.871548   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.871868   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:21.871896   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:21.872043   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:21.872239   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.872408   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:21.872527   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:21.872696   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:21.872847   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:21.872860   17475 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 16:49:22.134070   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 16:49:22.134095   17475 main.go:141] libmachine: Checking connection to Docker...
	I0816 16:49:22.134102   17475 main.go:141] libmachine: (addons-671083) Calling .GetURL
	I0816 16:49:22.135572   17475 main.go:141] libmachine: (addons-671083) DBG | Using libvirt version 6000000
	I0816 16:49:22.137843   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.138190   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.138221   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.138371   17475 main.go:141] libmachine: Docker is up and running!
	I0816 16:49:22.138386   17475 main.go:141] libmachine: Reticulating splines...
	I0816 16:49:22.138393   17475 client.go:171] duration metric: took 23.990076596s to LocalClient.Create
	I0816 16:49:22.138413   17475 start.go:167] duration metric: took 23.990143896s to libmachine.API.Create "addons-671083"
	I0816 16:49:22.138422   17475 start.go:293] postStartSetup for "addons-671083" (driver="kvm2")
	I0816 16:49:22.138430   17475 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 16:49:22.138446   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.138662   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 16:49:22.138684   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.140585   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.140926   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.140952   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.141067   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.141217   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.141360   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.141514   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:22.220583   17475 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 16:49:22.224660   17475 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 16:49:22.224679   17475 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 16:49:22.224767   17475 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 16:49:22.224801   17475 start.go:296] duration metric: took 86.372451ms for postStartSetup
	I0816 16:49:22.224841   17475 main.go:141] libmachine: (addons-671083) Calling .GetConfigRaw
	I0816 16:49:22.225400   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:22.228015   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.228329   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.228356   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.228607   17475 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/config.json ...
	I0816 16:49:22.228808   17475 start.go:128] duration metric: took 24.097843577s to createHost
	I0816 16:49:22.228830   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.231121   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.231427   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.231449   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.231581   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.231776   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.231916   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.232045   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.232188   17475 main.go:141] libmachine: Using SSH client type: native
	I0816 16:49:22.232328   17475 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0816 16:49:22.232338   17475 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 16:49:22.329268   17475 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723826962.306837323
	
	I0816 16:49:22.329293   17475 fix.go:216] guest clock: 1723826962.306837323
	I0816 16:49:22.329302   17475 fix.go:229] Guest: 2024-08-16 16:49:22.306837323 +0000 UTC Remote: 2024-08-16 16:49:22.228820507 +0000 UTC m=+24.194451298 (delta=78.016816ms)
	I0816 16:49:22.329347   17475 fix.go:200] guest clock delta is within tolerance: 78.016816ms
	I0816 16:49:22.329352   17475 start.go:83] releasing machines lock for "addons-671083", held for 24.198483464s
	I0816 16:49:22.329370   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.329601   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:22.331847   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.332122   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.332148   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.332295   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.332787   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.332972   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:22.333074   17475 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 16:49:22.333128   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.333213   17475 ssh_runner.go:195] Run: cat /version.json
	I0816 16:49:22.333241   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:22.335809   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336125   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336152   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.336170   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336315   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.336496   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.336587   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:22.336610   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:22.336657   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.336785   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:22.336850   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:22.336890   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:22.337035   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:22.337166   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:22.455116   17475 ssh_runner.go:195] Run: systemctl --version
	I0816 16:49:22.461526   17475 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 16:49:22.625159   17475 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 16:49:22.630466   17475 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 16:49:22.630529   17475 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 16:49:22.645886   17475 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 16:49:22.645910   17475 start.go:495] detecting cgroup driver to use...
	I0816 16:49:22.645966   17475 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 16:49:22.665926   17475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 16:49:22.679933   17475 docker.go:217] disabling cri-docker service (if available) ...
	I0816 16:49:22.680000   17475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 16:49:22.693228   17475 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 16:49:22.706115   17475 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 16:49:22.827685   17475 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 16:49:22.970987   17475 docker.go:233] disabling docker service ...
	I0816 16:49:22.971051   17475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 16:49:22.984803   17475 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 16:49:22.998013   17475 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 16:49:23.137822   17475 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 16:49:23.266235   17475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 16:49:23.286162   17475 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 16:49:23.302966   17475 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 16:49:23.303026   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.312392   17475 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 16:49:23.312464   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.321863   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.331321   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.340694   17475 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 16:49:23.350176   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.359512   17475 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.375249   17475 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 16:49:23.384525   17475 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 16:49:23.393049   17475 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 16:49:23.393097   17475 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 16:49:23.404223   17475 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 16:49:23.412877   17475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 16:49:23.523051   17475 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 16:49:23.654922   17475 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 16:49:23.655064   17475 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 16:49:23.659523   17475 start.go:563] Will wait 60s for crictl version
	I0816 16:49:23.659599   17475 ssh_runner.go:195] Run: which crictl
	I0816 16:49:23.663037   17475 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 16:49:23.698352   17475 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 16:49:23.698483   17475 ssh_runner.go:195] Run: crio --version
	I0816 16:49:23.724087   17475 ssh_runner.go:195] Run: crio --version
	I0816 16:49:23.751473   17475 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 16:49:23.752926   17475 main.go:141] libmachine: (addons-671083) Calling .GetIP
	I0816 16:49:23.755470   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:23.755818   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:23.755839   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:23.756083   17475 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 16:49:23.760086   17475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 16:49:23.771879   17475 kubeadm.go:883] updating cluster {Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 16:49:23.771997   17475 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 16:49:23.772041   17475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 16:49:23.801894   17475 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 16:49:23.801959   17475 ssh_runner.go:195] Run: which lz4
	I0816 16:49:23.805923   17475 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 16:49:23.809737   17475 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 16:49:23.809762   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 16:49:24.866202   17475 crio.go:462] duration metric: took 1.060313922s to copy over tarball
	I0816 16:49:24.866281   17475 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 16:49:26.924459   17475 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.058146903s)
	I0816 16:49:26.924493   17475 crio.go:469] duration metric: took 2.058266681s to extract the tarball
	I0816 16:49:26.924503   17475 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 16:49:26.961094   17475 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 16:49:27.001598   17475 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 16:49:27.001626   17475 cache_images.go:84] Images are preloaded, skipping loading
	I0816 16:49:27.001634   17475 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.31.0 crio true true} ...
	I0816 16:49:27.001731   17475 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-671083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 16:49:27.001791   17475 ssh_runner.go:195] Run: crio config
	I0816 16:49:27.041757   17475 cni.go:84] Creating CNI manager for ""
	I0816 16:49:27.041779   17475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:49:27.041791   17475 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 16:49:27.041820   17475 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-671083 NodeName:addons-671083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 16:49:27.041972   17475 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-671083"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 16:49:27.042029   17475 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 16:49:27.051237   17475 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 16:49:27.051308   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 16:49:27.060411   17475 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 16:49:27.075960   17475 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 16:49:27.090578   17475 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0816 16:49:27.106363   17475 ssh_runner.go:195] Run: grep 192.168.39.240	control-plane.minikube.internal$ /etc/hosts
	I0816 16:49:27.109970   17475 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 16:49:27.121189   17475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 16:49:27.232304   17475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 16:49:27.248032   17475 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083 for IP: 192.168.39.240
	I0816 16:49:27.248059   17475 certs.go:194] generating shared ca certs ...
	I0816 16:49:27.248077   17475 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.248237   17475 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 16:49:27.381753   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt ...
	I0816 16:49:27.381782   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt: {Name:mk6d327ac07a7e309565320b227eab2f0c3c16b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.381938   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key ...
	I0816 16:49:27.381948   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key: {Name:mk531a862bb1f6818fc284bd4510b9af89a30ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.382017   17475 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 16:49:27.529203   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt ...
	I0816 16:49:27.529229   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt: {Name:mk085bb605cf2710eff87a2d7387ebf03b6d81a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.529377   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key ...
	I0816 16:49:27.529388   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key: {Name:mk97b7b7a6a59b99d7bef0f92b9ec38593c29a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.529450   17475 certs.go:256] generating profile certs ...
	I0816 16:49:27.529500   17475 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.key
	I0816 16:49:27.529513   17475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt with IP's: []
	I0816 16:49:27.586097   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt ...
	I0816 16:49:27.586123   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: {Name:mke44386a63cceabbe31b6f26838a3bc63e55d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.586270   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.key ...
	I0816 16:49:27.586280   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.key: {Name:mk86f28cecac6f2f60291769bb16fc2a2c7ce4aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.586353   17475 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6
	I0816 16:49:27.586371   17475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240]
	I0816 16:49:27.739560   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6 ...
	I0816 16:49:27.739590   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6: {Name:mk83f1b4bb87ab0b9301b076c432e8b854cf7240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.739749   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6 ...
	I0816 16:49:27.739762   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6: {Name:mk5d84c5a9e73e6534ba86728e8ada61126679ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.739829   17475 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt.60d270f6 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt
	I0816 16:49:27.739897   17475 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key.60d270f6 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key
	I0816 16:49:27.739941   17475 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key
	I0816 16:49:27.739958   17475 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt with IP's: []
	I0816 16:49:27.837567   17475 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt ...
	I0816 16:49:27.837596   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt: {Name:mk15d605ea322d53750c270c4b1e85f4322af7fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.837762   17475 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key ...
	I0816 16:49:27.837777   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key: {Name:mk5de68870834ec73c34e593e465169a09f08758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:27.837964   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 16:49:27.838005   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 16:49:27.838053   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 16:49:27.838101   17475 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 16:49:27.838742   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 16:49:27.861769   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 16:49:27.883070   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 16:49:27.904384   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 16:49:27.927156   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 16:49:27.953057   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 16:49:27.976222   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 16:49:27.996750   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 16:49:28.017829   17475 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 16:49:28.039777   17475 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 16:49:28.054610   17475 ssh_runner.go:195] Run: openssl version
	I0816 16:49:28.059792   17475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 16:49:28.069136   17475 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 16:49:28.073021   17475 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 16:49:28.073064   17475 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 16:49:28.078402   17475 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 16:49:28.087529   17475 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 16:49:28.091174   17475 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 16:49:28.091222   17475 kubeadm.go:392] StartCluster: {Name:addons-671083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-671083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 16:49:28.091322   17475 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 16:49:28.091382   17475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 16:49:28.124558   17475 cri.go:89] found id: ""
	I0816 16:49:28.124672   17475 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 16:49:28.133925   17475 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 16:49:28.142576   17475 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 16:49:28.151416   17475 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 16:49:28.151437   17475 kubeadm.go:157] found existing configuration files:
	
	I0816 16:49:28.151490   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 16:49:28.159896   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 16:49:28.159962   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 16:49:28.168648   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 16:49:28.176866   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 16:49:28.176931   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 16:49:28.185493   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 16:49:28.193528   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 16:49:28.193594   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 16:49:28.201955   17475 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 16:49:28.209840   17475 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 16:49:28.209899   17475 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 16:49:28.218065   17475 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 16:49:28.263114   17475 kubeadm.go:310] W0816 16:49:28.246550     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 16:49:28.263765   17475 kubeadm.go:310] W0816 16:49:28.247504     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 16:49:28.365306   17475 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 16:49:37.921355   17475 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 16:49:37.921434   17475 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 16:49:37.921534   17475 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 16:49:37.921675   17475 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 16:49:37.921820   17475 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 16:49:37.921895   17475 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 16:49:37.923502   17475 out.go:235]   - Generating certificates and keys ...
	I0816 16:49:37.923602   17475 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 16:49:37.923667   17475 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 16:49:37.923730   17475 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 16:49:37.923782   17475 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 16:49:37.923832   17475 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 16:49:37.923879   17475 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 16:49:37.923948   17475 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 16:49:37.924092   17475 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-671083 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0816 16:49:37.924148   17475 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 16:49:37.924263   17475 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-671083 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0816 16:49:37.924369   17475 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 16:49:37.924473   17475 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 16:49:37.924537   17475 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 16:49:37.924611   17475 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 16:49:37.924692   17475 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 16:49:37.924781   17475 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 16:49:37.924845   17475 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 16:49:37.924915   17475 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 16:49:37.925003   17475 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 16:49:37.925110   17475 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 16:49:37.925211   17475 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 16:49:37.926690   17475 out.go:235]   - Booting up control plane ...
	I0816 16:49:37.926769   17475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 16:49:37.926833   17475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 16:49:37.926890   17475 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 16:49:37.927009   17475 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 16:49:37.927094   17475 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 16:49:37.927136   17475 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 16:49:37.927241   17475 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 16:49:37.927346   17475 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 16:49:37.927410   17475 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 507.361999ms
	I0816 16:49:37.927471   17475 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 16:49:37.927529   17475 kubeadm.go:310] [api-check] The API server is healthy after 5.002080019s
	I0816 16:49:37.927618   17475 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 16:49:37.927733   17475 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 16:49:37.927786   17475 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 16:49:37.927959   17475 kubeadm.go:310] [mark-control-plane] Marking the node addons-671083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 16:49:37.928017   17475 kubeadm.go:310] [bootstrap-token] Using token: xuuct1.enaoa72wl8k12y87
	I0816 16:49:37.929298   17475 out.go:235]   - Configuring RBAC rules ...
	I0816 16:49:37.929425   17475 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 16:49:37.929502   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 16:49:37.929620   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 16:49:37.929729   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 16:49:37.929835   17475 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 16:49:37.929926   17475 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 16:49:37.930029   17475 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 16:49:37.930090   17475 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 16:49:37.930129   17475 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 16:49:37.930135   17475 kubeadm.go:310] 
	I0816 16:49:37.930197   17475 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 16:49:37.930209   17475 kubeadm.go:310] 
	I0816 16:49:37.930282   17475 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 16:49:37.930290   17475 kubeadm.go:310] 
	I0816 16:49:37.930310   17475 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 16:49:37.930367   17475 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 16:49:37.930416   17475 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 16:49:37.930422   17475 kubeadm.go:310] 
	I0816 16:49:37.930475   17475 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 16:49:37.930489   17475 kubeadm.go:310] 
	I0816 16:49:37.930532   17475 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 16:49:37.930539   17475 kubeadm.go:310] 
	I0816 16:49:37.930588   17475 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 16:49:37.930651   17475 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 16:49:37.930707   17475 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 16:49:37.930712   17475 kubeadm.go:310] 
	I0816 16:49:37.930782   17475 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 16:49:37.930849   17475 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 16:49:37.930855   17475 kubeadm.go:310] 
	I0816 16:49:37.930922   17475 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xuuct1.enaoa72wl8k12y87 \
	I0816 16:49:37.931007   17475 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 16:49:37.931026   17475 kubeadm.go:310] 	--control-plane 
	I0816 16:49:37.931032   17475 kubeadm.go:310] 
	I0816 16:49:37.931111   17475 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 16:49:37.931125   17475 kubeadm.go:310] 
	I0816 16:49:37.931190   17475 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xuuct1.enaoa72wl8k12y87 \
	I0816 16:49:37.931298   17475 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 16:49:37.931312   17475 cni.go:84] Creating CNI manager for ""
	I0816 16:49:37.931327   17475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:49:37.933466   17475 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 16:49:37.934520   17475 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 16:49:37.945860   17475 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
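(For readers tracing the bridge CNI step above: the config written to /etc/cni/net.d/1-k8s.conflist can be inspected directly on the node. A minimal sketch using the minikube CLI against this profile; illustrative only, since the exact 496-byte contents are not captured in this log:

	minikube -p addons-671083 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
)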
	I0816 16:49:37.965501   17475 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 16:49:37.965576   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-671083 minikube.k8s.io/updated_at=2024_08_16T16_49_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=addons-671083 minikube.k8s.io/primary=true
	I0816 16:49:37.965590   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:37.993859   17475 ops.go:34] apiserver oom_adj: -16
	I0816 16:49:38.104530   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:38.604771   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:39.105224   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:39.604849   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:40.105589   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:40.604979   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:41.105313   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:41.604842   17475 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 16:49:41.682288   17475 kubeadm.go:1113] duration metric: took 3.716787052s to wait for elevateKubeSystemPrivileges
	I0816 16:49:41.682325   17475 kubeadm.go:394] duration metric: took 13.591107205s to StartCluster
	I0816 16:49:41.682349   17475 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:41.682478   17475 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 16:49:41.682872   17475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 16:49:41.683062   17475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 16:49:41.683094   17475 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 16:49:41.683152   17475 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
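(The toEnable map above records which addons this run turns on; a quick way to cross-check the active addon set for the profile, sketched here on the assumption that the same minikube binary and profile name are available:

	minikube -p addons-671083 addons list
)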
	I0816 16:49:41.683258   17475 config.go:182] Loaded profile config "addons-671083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 16:49:41.683268   17475 addons.go:69] Setting cloud-spanner=true in profile "addons-671083"
	I0816 16:49:41.683275   17475 addons.go:69] Setting registry=true in profile "addons-671083"
	I0816 16:49:41.683256   17475 addons.go:69] Setting yakd=true in profile "addons-671083"
	I0816 16:49:41.683319   17475 addons.go:69] Setting ingress=true in profile "addons-671083"
	I0816 16:49:41.683321   17475 addons.go:69] Setting ingress-dns=true in profile "addons-671083"
	I0816 16:49:41.683315   17475 addons.go:69] Setting volcano=true in profile "addons-671083"
	I0816 16:49:41.683319   17475 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-671083"
	I0816 16:49:41.683337   17475 addons.go:234] Setting addon ingress=true in "addons-671083"
	I0816 16:49:41.683339   17475 addons.go:234] Setting addon ingress-dns=true in "addons-671083"
	I0816 16:49:41.683347   17475 addons.go:234] Setting addon volcano=true in "addons-671083"
	I0816 16:49:41.683364   17475 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-671083"
	I0816 16:49:41.683372   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683374   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683374   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683270   17475 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-671083"
	I0816 16:49:41.683449   17475 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-671083"
	I0816 16:49:41.683476   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683338   17475 addons.go:234] Setting addon yakd=true in "addons-671083"
	I0816 16:49:41.683263   17475 addons.go:69] Setting inspektor-gadget=true in profile "addons-671083"
	I0816 16:49:41.683533   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683581   17475 addons.go:234] Setting addon inspektor-gadget=true in "addons-671083"
	I0816 16:49:41.683614   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683807   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683811   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683815   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683830   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683833   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683839   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683856   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683265   17475 addons.go:69] Setting metrics-server=true in profile "addons-671083"
	I0816 16:49:41.683904   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.683928   17475 addons.go:234] Setting addon metrics-server=true in "addons-671083"
	I0816 16:49:41.683939   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683956   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683297   17475 addons.go:234] Setting addon cloud-spanner=true in "addons-671083"
	I0816 16:49:41.683841   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683301   17475 addons.go:234] Setting addon registry=true in "addons-671083"
	I0816 16:49:41.683818   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684112   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683307   17475 addons.go:69] Setting default-storageclass=true in profile "addons-671083"
	I0816 16:49:41.684167   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.683303   17475 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-671083"
	I0816 16:49:41.684234   17475 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-671083"
	I0816 16:49:41.684190   17475 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-671083"
	I0816 16:49:41.683313   17475 addons.go:69] Setting storage-provisioner=true in profile "addons-671083"
	I0816 16:49:41.684279   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684297   17475 addons.go:234] Setting addon storage-provisioner=true in "addons-671083"
	I0816 16:49:41.683317   17475 addons.go:69] Setting helm-tiller=true in profile "addons-671083"
	I0816 16:49:41.684323   17475 addons.go:234] Setting addon helm-tiller=true in "addons-671083"
	I0816 16:49:41.684307   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683312   17475 addons.go:69] Setting gcp-auth=true in profile "addons-671083"
	I0816 16:49:41.684395   17475 mustload.go:65] Loading cluster: addons-671083
	I0816 16:49:41.684498   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684523   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.683306   17475 addons.go:69] Setting volumesnapshots=true in profile "addons-671083"
	I0816 16:49:41.684573   17475 config.go:182] Loaded profile config "addons-671083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 16:49:41.684593   17475 addons.go:234] Setting addon volumesnapshots=true in "addons-671083"
	I0816 16:49:41.684647   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.684667   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684694   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.684837   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.684918   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.684999   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.685234   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.685275   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.685324   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.685598   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685620   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.685620   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685649   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.685661   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685677   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.684966   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.685946   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.686455   17475 out.go:177] * Verifying Kubernetes components...
	I0816 16:49:41.688411   17475 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 16:49:41.705854   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I0816 16:49:41.706123   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0816 16:49:41.706257   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0816 16:49:41.706392   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.706515   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.706861   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.706883   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.706923   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.706955   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.707247   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.707798   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.707839   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.707846   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0816 16:49:41.708004   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.708164   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.708438   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.708590   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.708603   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.708641   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.708675   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.708812   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.708826   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.708876   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.714148   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.714207   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0816 16:49:41.714725   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.715069   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.715098   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.720826   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0816 16:49:41.720942   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.720963   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.721029   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I0816 16:49:41.720949   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45199
	I0816 16:49:41.721143   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.721160   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.721204   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.721236   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.721416   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.721436   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.723286   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.723735   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.723832   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.724288   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.724301   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.724397   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.724403   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.724776   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.725201   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.725233   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.734988   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.735206   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.735257   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.735623   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.735642   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.736046   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.736612   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.736657   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.737479   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.737515   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.740201   17475 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-671083"
	I0816 16:49:41.740242   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.740588   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.740646   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.740834   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0816 16:49:41.741355   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.741881   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.741898   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.742268   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.742476   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.744589   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.746136   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0816 16:49:41.746861   17475 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 16:49:41.747092   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.747602   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.747621   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.748003   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.748141   17475 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 16:49:41.748159   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 16:49:41.748179   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.748185   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.748986   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33509
	I0816 16:49:41.749444   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.750231   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.750254   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.750831   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.752359   17475 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 16:49:41.752823   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.753440   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.753475   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.753663   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.753728   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.753810   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.753834   17475 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 16:49:41.753848   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 16:49:41.753865   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.753935   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.754058   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.754301   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.754336   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.757732   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.758205   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.758226   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.758547   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.758751   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.758894   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.759060   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.769371   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I0816 16:49:41.770102   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0816 16:49:41.770235   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.770620   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.770989   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I0816 16:49:41.771171   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.771194   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.771441   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.771517   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.771716   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.771900   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.771918   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.772292   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.772386   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0816 16:49:41.772899   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.773236   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.773808   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.773824   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.773885   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.773956   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.773970   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.774475   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.774802   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.775066   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.775518   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.775553   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.776177   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.776241   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0816 16:49:41.776514   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0816 16:49:41.777061   17475 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 16:49:41.777492   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.777498   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.777928   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.777941   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.778342   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.778417   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0816 16:49:41.778577   17475 addons.go:234] Setting addon default-storageclass=true in "addons-671083"
	I0816 16:49:41.778589   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.778608   17475 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 16:49:41.778611   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.778621   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 16:49:41.778637   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.778957   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.778989   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.779209   17475 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 16:49:41.779718   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.779735   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.780139   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.780307   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.780390   17475 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 16:49:41.780416   17475 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 16:49:41.780437   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.780556   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.780634   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.781682   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.781699   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.782047   17475 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 16:49:41.782596   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.783270   17475 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 16:49:41.783287   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 16:49:41.783303   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.783331   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.783377   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:41.783666   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.783688   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.784435   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.784471   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.784709   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.784752   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0816 16:49:41.785072   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.785092   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.785290   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.785293   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.785494   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.785984   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.786003   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.786062   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.786070   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.786092   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.786234   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.786543   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.786545   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.786747   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.786801   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.786982   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.787148   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.787411   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.787443   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.787456   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.788197   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.788424   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.788617   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.788786   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.790493   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.790751   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:41.790763   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:41.792523   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:41.792547   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:41.792553   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:41.792559   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:41.792563   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:41.792787   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:41.792800   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	W0816 16:49:41.792887   17475 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0816 16:49:41.798158   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0816 16:49:41.798674   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.799237   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.799256   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.799627   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.799833   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.803488   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0816 16:49:41.803662   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0816 16:49:41.804093   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.804199   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.804684   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.804712   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.804869   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.804888   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.805224   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.805768   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.805809   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.806031   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33101
	I0816 16:49:41.806474   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.806530   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.806668   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0816 16:49:41.806922   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.807046   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.807055   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.807620   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.807929   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.808338   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0816 16:49:41.809298   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.809620   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0816 16:49:41.809769   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.809980   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.810003   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0816 16:49:41.809852   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.810455   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.810475   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.810636   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.810787   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.810807   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.811021   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.811121   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.811182   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.811221   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.811690   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.811728   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.811783   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0816 16:49:41.812135   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.812177   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.812486   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.812509   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.812564   17475 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 16:49:41.812789   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.812812   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.812880   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.813067   17475 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 16:49:41.813158   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.813441   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0816 16:49:41.813540   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.813581   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.813603   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.814514   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.814531   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 16:49:41.814563   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 16:49:41.814898   17475 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 16:49:41.814917   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.815335   17475 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 16:49:41.815395   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.815899   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.815917   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.816761   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.816978   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.817025   17475 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 16:49:41.817038   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 16:49:41.817061   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.818153   17475 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0816 16:49:41.818239   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 16:49:41.819712   17475 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0816 16:49:41.819730   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0816 16:49:41.819747   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.819966   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.820044   17475 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 16:49:41.820061   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 16:49:41.820075   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.820350   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.820798   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.820818   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.821200   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.821635   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.821862   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.822113   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.823897   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.823929   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.823948   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.824035   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.824291   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.824343   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.824357   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.824394   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.824591   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.824679   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.824841   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.825009   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.825011   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.825267   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.825588   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.825611   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.825853   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.826025   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.826154   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.826271   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.829052   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0816 16:49:41.829363   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.829826   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.829845   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.830266   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.830728   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:41.830757   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:41.835187   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0816 16:49:41.835328   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0816 16:49:41.836257   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.836266   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.836746   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.836769   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.836902   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.836919   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.837273   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.837604   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.837646   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.837733   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.839652   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.839910   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.841666   17475 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 16:49:41.841685   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 16:49:41.842075   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0816 16:49:41.842451   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.842798   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 16:49:41.842823   17475 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 16:49:41.842842   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.842910   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.842925   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.842912   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41741
	I0816 16:49:41.843009   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 16:49:41.843023   17475 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 16:49:41.843041   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.843381   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.843561   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.843903   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.844573   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.844606   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.845183   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.845644   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.846501   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.846883   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.847276   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.847300   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.847429   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.847783   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.847907   17475 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 16:49:41.848284   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.848321   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.848377   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.848554   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.849193   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.849218   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.849392   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.849545   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.849684   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.849728   17475 out.go:177]   - Using image docker.io/busybox:stable
	I0816 16:49:41.849745   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 16:49:41.849838   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.851112   17475 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 16:49:41.851129   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 16:49:41.851147   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.851976   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I0816 16:49:41.852006   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 16:49:41.852554   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:41.853154   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:41.853178   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:41.853551   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:41.853727   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:41.854441   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 16:49:41.854598   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.855094   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.855124   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.855166   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:41.855361   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.855363   17475 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 16:49:41.855400   17475 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 16:49:41.855408   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.855496   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.855677   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.855795   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.856511   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 16:49:41.857585   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 16:49:41.858390   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.858748   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.858774   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.858926   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.859151   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.859306   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.859441   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:41.859787   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 16:49:41.860975   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 16:49:41.862091   17475 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 16:49:41.863137   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 16:49:41.863159   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 16:49:41.863181   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:41.866255   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.866639   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:41.866654   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:41.866816   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:41.866970   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:41.867058   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:41.867130   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	W0816 16:49:41.877808   17475 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46902->192.168.39.240:22: read: connection reset by peer
	I0816 16:49:41.877847   17475 retry.go:31] will retry after 205.707768ms: ssh: handshake failed: read tcp 192.168.39.1:46902->192.168.39.240:22: read: connection reset by peer
	I0816 16:49:42.128251   17475 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 16:49:42.128276   17475 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 16:49:42.147026   17475 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 16:49:42.147049   17475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 16:49:42.148901   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 16:49:42.148918   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 16:49:42.211397   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 16:49:42.214063   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 16:49:42.234360   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 16:49:42.234390   17475 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 16:49:42.244323   17475 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 16:49:42.244348   17475 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 16:49:42.246982   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 16:49:42.248024   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 16:49:42.261768   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 16:49:42.279231   17475 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 16:49:42.279252   17475 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 16:49:42.281911   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 16:49:42.284283   17475 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0816 16:49:42.284305   17475 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0816 16:49:42.289272   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 16:49:42.293655   17475 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 16:49:42.293676   17475 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 16:49:42.420307   17475 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 16:49:42.420339   17475 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 16:49:42.452168   17475 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 16:49:42.452194   17475 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0816 16:49:42.463898   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 16:49:42.463931   17475 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 16:49:42.506638   17475 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 16:49:42.506657   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 16:49:42.531253   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 16:49:42.531281   17475 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 16:49:42.576917   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 16:49:42.576946   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 16:49:42.580927   17475 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 16:49:42.580947   17475 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 16:49:42.616752   17475 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 16:49:42.616774   17475 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 16:49:42.670886   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0816 16:49:42.727245   17475 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 16:49:42.727277   17475 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 16:49:42.728653   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 16:49:42.728672   17475 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 16:49:42.752374   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 16:49:42.795598   17475 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 16:49:42.795633   17475 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 16:49:42.813915   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 16:49:42.813942   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 16:49:42.855297   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 16:49:42.855333   17475 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 16:49:42.864883   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 16:49:42.889069   17475 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 16:49:42.889093   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 16:49:42.972089   17475 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 16:49:42.972112   17475 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 16:49:42.993675   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 16:49:42.993700   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 16:49:43.031829   17475 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 16:49:43.031849   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 16:49:43.078742   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 16:49:43.156969   17475 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 16:49:43.156995   17475 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 16:49:43.211009   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 16:49:43.245841   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 16:49:43.245910   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 16:49:43.436179   17475 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 16:49:43.436212   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 16:49:43.471720   17475 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 16:49:43.471748   17475 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 16:49:43.645354   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 16:49:43.645375   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 16:49:43.686628   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 16:49:43.911641   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 16:49:43.911791   17475 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 16:49:44.006173   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 16:49:44.006195   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 16:49:44.159626   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 16:49:44.159651   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 16:49:44.289626   17475 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 16:49:44.289656   17475 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 16:49:44.553241   17475 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.406162375s)
	I0816 16:49:44.553275   17475 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0816 16:49:44.553313   17475 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.406257866s)
	I0816 16:49:44.554511   17475 node_ready.go:35] waiting up to 6m0s for node "addons-671083" to be "Ready" ...
	I0816 16:49:44.569361   17475 node_ready.go:49] node "addons-671083" has status "Ready":"True"
	I0816 16:49:44.569383   17475 node_ready.go:38] duration metric: took 14.852002ms for node "addons-671083" to be "Ready" ...
	I0816 16:49:44.569393   17475 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 16:49:44.652691   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 16:49:44.653817   17475 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:45.094988   17475 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-671083" context rescaled to 1 replicas
	I0816 16:49:46.663921   17475 pod_ready.go:103] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:48.690025   17475 pod_ready.go:103] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:48.841302   17475 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 16:49:48.841341   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:48.844288   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:48.844610   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:48.844654   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:48.844789   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:48.845023   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:48.845183   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:48.845422   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:49.074767   17475 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 16:49:49.102644   17475 addons.go:234] Setting addon gcp-auth=true in "addons-671083"
	I0816 16:49:49.102732   17475 host.go:66] Checking if "addons-671083" exists ...
	I0816 16:49:49.103176   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:49.103215   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:49.118790   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0816 16:49:49.119299   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:49.119862   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:49.119894   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:49.120273   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:49.120925   17475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 16:49:49.120960   17475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 16:49:49.136215   17475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0816 16:49:49.136701   17475 main.go:141] libmachine: () Calling .GetVersion
	I0816 16:49:49.137278   17475 main.go:141] libmachine: Using API Version  1
	I0816 16:49:49.137306   17475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 16:49:49.137700   17475 main.go:141] libmachine: () Calling .GetMachineName
	I0816 16:49:49.137914   17475 main.go:141] libmachine: (addons-671083) Calling .GetState
	I0816 16:49:49.139630   17475 main.go:141] libmachine: (addons-671083) Calling .DriverName
	I0816 16:49:49.139887   17475 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 16:49:49.139914   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHHostname
	I0816 16:49:49.143232   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:49.143673   17475 main.go:141] libmachine: (addons-671083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:34:d9", ip: ""} in network mk-addons-671083: {Iface:virbr1 ExpiryTime:2024-08-16 17:49:12 +0000 UTC Type:0 Mac:52:54:00:4b:34:d9 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:addons-671083 Clientid:01:52:54:00:4b:34:d9}
	I0816 16:49:49.143705   17475 main.go:141] libmachine: (addons-671083) DBG | domain addons-671083 has defined IP address 192.168.39.240 and MAC address 52:54:00:4b:34:d9 in network mk-addons-671083
	I0816 16:49:49.143842   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHPort
	I0816 16:49:49.144046   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHKeyPath
	I0816 16:49:49.144217   17475 main.go:141] libmachine: (addons-671083) Calling .GetSSHUsername
	I0816 16:49:49.144394   17475 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/addons-671083/id_rsa Username:docker}
	I0816 16:49:50.347685   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.136249715s)
	I0816 16:49:50.347735   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.347748   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.347786   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.133690502s)
	I0816 16:49:50.347837   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.347850   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.347851   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.100823994s)
	I0816 16:49:50.347871   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.347884   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.347965   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.099895276s)
	I0816 16:49:50.348012   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348051   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348198   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.086405668s)
	I0816 16:49:50.348226   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348233   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348299   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.066369333s)
	I0816 16:49:50.348319   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348327   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348405   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.0591027s)
	I0816 16:49:50.348418   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348425   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348495   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.677584762s)
	I0816 16:49:50.348515   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348529   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348574   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.348582   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.596181723s)
	I0816 16:49:50.348595   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348603   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348614   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.348651   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.348660   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348667   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348698   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.483789547s)
	I0816 16:49:50.348714   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348720   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.348722   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348729   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.348737   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348745   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348784   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.270009033s)
	I0816 16:49:50.348798   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348805   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348912   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.13786678s)
	W0816 16:49:50.348939   17475 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 16:49:50.348965   17475 retry.go:31] will retry after 150.564904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 16:49:50.349042   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.66238718s)
	I0816 16:49:50.349056   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.349063   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.349108   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349128   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.349140   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.349276   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349300   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349315   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349331   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.349354   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.349365   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350094   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350117   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350126   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.350133   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.350184   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350209   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350216   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350224   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.350232   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.350533   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350587   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350602   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350611   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.350621   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.350680   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350706   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350717   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350758   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350888   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.350943   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.350954   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.350963   17475 addons.go:475] Verifying addon registry=true in "addons-671083"
	I0816 16:49:50.351484   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.351515   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.351526   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.352393   17475 out.go:177] * Verifying registry addon...
	I0816 16:49:50.352749   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.352799   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.352834   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.352861   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.352884   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.352900   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348014   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.352934   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.352945   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.352953   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.348116   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.352994   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.353002   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.353009   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.353017   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.353025   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.348135   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.353051   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.353059   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.353067   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.348155   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.353033   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.353083   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.353075   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.353909   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.353933   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.353940   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354066   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354098   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354106   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354113   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.354121   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.354169   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354189   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354221   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354229   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354236   17475 addons.go:475] Verifying addon ingress=true in "addons-671083"
	I0816 16:49:50.354273   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.354327   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354335   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354343   17475 addons.go:475] Verifying addon metrics-server=true in "addons-671083"
	I0816 16:49:50.348086   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.354597   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354608   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.354616   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.355035   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.355068   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.355075   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.355613   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.355623   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.354304   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.355782   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.356037   17475 out.go:177] * Verifying ingress addon...
	I0816 16:49:50.356124   17475 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 16:49:50.357023   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.357086   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.357123   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.357337   17475 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-671083 service yakd-dashboard -n yakd-dashboard
	
	I0816 16:49:50.358313   17475 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 16:49:50.382266   17475 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 16:49:50.382295   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:50.385956   17475 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 16:49:50.385975   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:50.398837   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.398862   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.399297   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:50.399345   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.399354   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	W0816 16:49:50.399433   17475 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0816 16:49:50.407098   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:50.407118   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:50.407404   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:50.407424   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:50.499922   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 16:49:50.870589   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:50.872882   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:51.175544   17475 pod_ready.go:103] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:51.355975   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.703230957s)
	I0816 16:49:51.356030   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:51.356043   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:51.355988   17475 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.216077119s)
	I0816 16:49:51.356388   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:51.356410   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:51.356418   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:51.356424   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:51.356695   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:51.356708   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:51.356722   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:51.356731   17475 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-671083"
	I0816 16:49:51.359305   17475 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 16:49:51.359320   17475 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 16:49:51.360714   17475 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 16:49:51.361539   17475 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 16:49:51.361552   17475 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 16:49:51.361626   17475 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 16:49:51.397203   17475 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 16:49:51.397226   17475 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 16:49:51.399360   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:51.399962   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:51.400495   17475 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 16:49:51.400519   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
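The kapi.go waits above poll pods by label selector until they leave Pending and report Ready. A rough manual equivalent, assuming the selector, namespace, and profile name shown in this log, would be:

    kubectl --context addons-671083 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=ready --timeout=6m0s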
	I0816 16:49:51.503342   17475 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 16:49:51.503369   17475 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 16:49:51.556046   17475 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 16:49:51.860495   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:51.863128   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:51.866033   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:52.450400   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:52.451065   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:52.451185   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:52.661731   17475 pod_ready.go:93] pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:52.661755   17475 pod_ready.go:82] duration metric: took 8.007913783s for pod "coredns-6f6b679f8f-jq9bq" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:52.661766   17475 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:52.728497   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.228529416s)
	I0816 16:49:52.728555   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:52.728583   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:52.728872   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:52.728923   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:52.728934   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:52.728954   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:52.728967   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:52.729166   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:52.729182   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:52.874580   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:52.876521   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:52.880122   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:53.018606   17475 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.462523547s)
	I0816 16:49:53.018652   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:53.018667   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:53.018955   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:53.019011   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:53.019031   17475 main.go:141] libmachine: Making call to close driver server
	I0816 16:49:53.019039   17475 main.go:141] libmachine: (addons-671083) Calling .Close
	I0816 16:49:53.019056   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:53.019339   17475 main.go:141] libmachine: (addons-671083) DBG | Closing plugin on server side
	I0816 16:49:53.019341   17475 main.go:141] libmachine: Successfully made call to close driver server
	I0816 16:49:53.019370   17475 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 16:49:53.021232   17475 addons.go:475] Verifying addon gcp-auth=true in "addons-671083"
	I0816 16:49:53.022678   17475 out.go:177] * Verifying gcp-auth addon...
	I0816 16:49:53.024482   17475 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
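Several addons are being verified at this point; which ones were enabled for this run can also be cross-checked with the addons subcommand. A minimal sketch, assuming the profile name from this log:

    minikube -p addons-671083 addons list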
	I0816 16:49:53.071131   17475 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 16:49:53.071153   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:53.368574   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:53.368617   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:53.372207   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:53.527731   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:53.862012   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:53.866439   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:53.869853   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:54.028501   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:54.170616   17475 pod_ready.go:98] pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:54 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.240 HostIPs:[{IP:192.168.39.240}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-16 16:49:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-16 16:49:48 +0000 UTC,FinishedAt:2024-08-16 16:49:53 +0000 UTC,ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142 Started:0xc002151f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001af3eb0} {Name:kube-api-access-nxb56 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001af3ec0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0816 16:49:54.170647   17475 pod_ready.go:82] duration metric: took 1.508873244s for pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace to be "Ready" ...
	E0816 16:49:54.170661   17475 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-z4wg6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:54 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-16 16:49:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.240 HostIPs:[{IP:192.168.39.240}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-16 16:49:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-16 16:49:48 +0000 UTC,FinishedAt:2024-08-16 16:49:53 +0000 UTC,ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://663dbdc93002796eec820a926f18a3c3a5d9f6411dcdfbeceae5c1106c031142 Started:0xc002151f20 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001af3eb0} {Name:kube-api-access-nxb56 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001af3ec0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0816 16:49:54.170673   17475 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.176700   17475 pod_ready.go:93] pod "etcd-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.176717   17475 pod_ready.go:82] duration metric: took 6.035654ms for pod "etcd-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.176725   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.182254   17475 pod_ready.go:93] pod "kube-apiserver-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.182270   17475 pod_ready.go:82] duration metric: took 5.53894ms for pod "kube-apiserver-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.182277   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.187072   17475 pod_ready.go:93] pod "kube-controller-manager-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.187086   17475 pod_ready.go:82] duration metric: took 4.802902ms for pod "kube-controller-manager-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.187093   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vcpxh" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.259709   17475 pod_ready.go:93] pod "kube-proxy-vcpxh" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.259728   17475 pod_ready.go:82] duration metric: took 72.630163ms for pod "kube-proxy-vcpxh" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.259736   17475 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.360303   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:54.362427   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:54.364968   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:54.528524   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:54.658649   17475 pod_ready.go:93] pod "kube-scheduler-addons-671083" in "kube-system" namespace has status "Ready":"True"
	I0816 16:49:54.658678   17475 pod_ready.go:82] duration metric: took 398.934745ms for pod "kube-scheduler-addons-671083" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.658691   17475 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace to be "Ready" ...
	I0816 16:49:54.860436   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:54.861875   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:54.864931   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:55.028403   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:55.359913   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:55.362116   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:55.365064   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:55.528643   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:55.860825   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:55.862378   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:55.865783   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:56.027608   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:56.360344   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:56.362033   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:56.365250   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:56.527711   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:56.664818   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:56.860638   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:56.863671   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:56.865090   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:57.028263   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:57.360395   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:57.362817   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:57.365142   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:57.528811   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:57.860107   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:57.862225   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:57.865723   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:58.029726   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:58.360126   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:58.362274   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:58.365902   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:58.528104   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:58.665512   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:49:58.860044   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:58.862238   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:58.865239   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:59.027536   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:59.360358   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:59.362376   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:59.366226   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:49:59.528833   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:49:59.859701   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:49:59.862032   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:49:59.865298   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:00.028459   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:00.361804   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:00.363972   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:00.365409   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:00.528070   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:00.862045   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:00.862530   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:00.865523   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:01.029990   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:01.166119   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:01.361255   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:01.363021   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:01.365135   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:01.528355   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:01.860016   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:01.862095   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:01.865336   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:02.027282   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:02.368576   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:02.373830   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:02.374943   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:02.527797   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:02.860332   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:02.862817   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:02.865094   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:03.028346   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:03.360420   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:03.363612   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:03.367257   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:03.527459   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:03.664683   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:03.860772   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:03.862218   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:03.865423   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:04.027421   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:04.362681   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:04.365575   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:04.370202   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:04.528717   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:04.860421   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:04.861986   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:04.865449   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:05.028402   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:05.360473   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:05.363148   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:05.367676   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:05.528896   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:05.665386   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:05.859986   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:05.861864   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:05.864990   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:06.028196   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:06.362331   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:06.362468   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:06.368886   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:06.528062   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:06.859992   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:06.862545   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:06.865837   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:07.029046   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:07.360305   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:07.362694   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:07.365104   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:07.528799   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:07.860236   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:07.862438   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:07.865459   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:08.028155   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:08.164898   17475 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"False"
	I0816 16:50:08.373773   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:08.373863   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:08.374288   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:08.528884   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:08.861329   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:08.863059   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:08.865303   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:09.027577   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:09.360522   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:09.362691   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:09.364936   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:09.528957   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:09.664732   17475 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace has status "Ready":"True"
	I0816 16:50:09.664755   17475 pod_ready.go:82] duration metric: took 15.00605745s for pod "nvidia-device-plugin-daemonset-6fkvh" in "kube-system" namespace to be "Ready" ...
	I0816 16:50:09.664763   17475 pod_ready.go:39] duration metric: took 25.095357982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
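The pod_ready helper above inspects each pod's Ready condition. A hedged manual spot-check of one of the pods named in this log (pod and context names taken from the output above):

    kubectl --context addons-671083 -n kube-system get pod etcd-addons-671083 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'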
	I0816 16:50:09.664778   17475 api_server.go:52] waiting for apiserver process to appear ...
	I0816 16:50:09.664827   17475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 16:50:09.682601   17475 api_server.go:72] duration metric: took 27.999466706s to wait for apiserver process to appear ...
	I0816 16:50:09.682628   17475 api_server.go:88] waiting for apiserver healthz status ...
	I0816 16:50:09.682645   17475 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0816 16:50:09.687727   17475 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0816 16:50:09.688733   17475 api_server.go:141] control plane version: v1.31.0
	I0816 16:50:09.688755   17475 api_server.go:131] duration metric: took 6.121364ms to wait for apiserver health ...
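The healthz probe is an HTTPS GET against the API server; with this cluster's kubeconfig the same endpoint, and the v1.31.0 control-plane version reported above, can be queried directly. A sketch, not part of the test itself:

    kubectl --context addons-671083 get --raw /healthz
    kubectl --context addons-671083 version -o json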
	I0816 16:50:09.688763   17475 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 16:50:09.697181   17475 system_pods.go:59] 18 kube-system pods found
	I0816 16:50:09.697212   17475 system_pods.go:61] "coredns-6f6b679f8f-jq9bq" [50cf4e20-39bf-4c95-9744-3f86148fcb61] Running
	I0816 16:50:09.697222   17475 system_pods.go:61] "csi-hostpath-attacher-0" [828bbc78-aefd-4414-b73f-3386e27ddf03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 16:50:09.697228   17475 system_pods.go:61] "csi-hostpath-resizer-0" [6e1d39ba-5f5f-4cdf-8109-b1382360eccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 16:50:09.697237   17475 system_pods.go:61] "csi-hostpathplugin-lfs24" [344d6dad-37be-4ec3-8791-fde08e6ebd57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 16:50:09.697242   17475 system_pods.go:61] "etcd-addons-671083" [147192dd-da81-4ad1-8a05-52eedfbc84fd] Running
	I0816 16:50:09.697246   17475 system_pods.go:61] "kube-apiserver-addons-671083" [71555bba-161f-472e-90f1-cfe377e16b84] Running
	I0816 16:50:09.697250   17475 system_pods.go:61] "kube-controller-manager-addons-671083" [3382946c-61b4-45a7-8b77-e63a0a7f9d34] Running
	I0816 16:50:09.697253   17475 system_pods.go:61] "kube-ingress-dns-minikube" [a737f23d-c62b-4073-9b90-6c95e9a3374b] Running
	I0816 16:50:09.697256   17475 system_pods.go:61] "kube-proxy-vcpxh" [fa9fb911-4140-45c4-b33c-e7c7616ee708] Running
	I0816 16:50:09.697260   17475 system_pods.go:61] "kube-scheduler-addons-671083" [944ee8a4-dc5e-481b-bedd-56a6c34ba6e7] Running
	I0816 16:50:09.697265   17475 system_pods.go:61] "metrics-server-8988944d9-qjczl" [499be229-e123-4025-afef-b0608d31b95d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 16:50:09.697271   17475 system_pods.go:61] "nvidia-device-plugin-daemonset-6fkvh" [fad33474-a661-4441-a3d3-61e1e753fc6a] Running
	I0816 16:50:09.697276   17475 system_pods.go:61] "registry-6fb4cdfc84-rvzfr" [ef669560-d120-4b0c-96ee-3b4786b10c8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 16:50:09.697284   17475 system_pods.go:61] "registry-proxy-qpbf4" [afdfd628-7037-4056-b825-d6a9bf88c250] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 16:50:09.697291   17475 system_pods.go:61] "snapshot-controller-56fcc65765-2trrn" [bd75e67c-ed92-466e-8915-d2d5d1e87ad6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.697300   17475 system_pods.go:61] "snapshot-controller-56fcc65765-6kxd8" [2b18846d-7cdb-4733-8f3a-7f522ff67f18] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.697305   17475 system_pods.go:61] "storage-provisioner" [eb6c00fa-72db-4dfe-a3d9-054186223927] Running
	I0816 16:50:09.697311   17475 system_pods.go:61] "tiller-deploy-b48cc5f79-xdgrc" [9075d95d-30f9-45ec-944b-3ee3d7e01862] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 16:50:09.697316   17475 system_pods.go:74] duration metric: took 8.547956ms to wait for pod list to return data ...
	I0816 16:50:09.697324   17475 default_sa.go:34] waiting for default service account to be created ...
	I0816 16:50:09.699598   17475 default_sa.go:45] found service account: "default"
	I0816 16:50:09.699617   17475 default_sa.go:55] duration metric: took 2.286452ms for default service account to be created ...
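A hedged equivalent of the default service-account check, using the context name from this log:

    kubectl --context addons-671083 -n default get serviceaccount default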
	I0816 16:50:09.699643   17475 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 16:50:09.709799   17475 system_pods.go:86] 18 kube-system pods found
	I0816 16:50:09.709824   17475 system_pods.go:89] "coredns-6f6b679f8f-jq9bq" [50cf4e20-39bf-4c95-9744-3f86148fcb61] Running
	I0816 16:50:09.709833   17475 system_pods.go:89] "csi-hostpath-attacher-0" [828bbc78-aefd-4414-b73f-3386e27ddf03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 16:50:09.709840   17475 system_pods.go:89] "csi-hostpath-resizer-0" [6e1d39ba-5f5f-4cdf-8109-b1382360eccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 16:50:09.709847   17475 system_pods.go:89] "csi-hostpathplugin-lfs24" [344d6dad-37be-4ec3-8791-fde08e6ebd57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 16:50:09.709852   17475 system_pods.go:89] "etcd-addons-671083" [147192dd-da81-4ad1-8a05-52eedfbc84fd] Running
	I0816 16:50:09.709857   17475 system_pods.go:89] "kube-apiserver-addons-671083" [71555bba-161f-472e-90f1-cfe377e16b84] Running
	I0816 16:50:09.709861   17475 system_pods.go:89] "kube-controller-manager-addons-671083" [3382946c-61b4-45a7-8b77-e63a0a7f9d34] Running
	I0816 16:50:09.709866   17475 system_pods.go:89] "kube-ingress-dns-minikube" [a737f23d-c62b-4073-9b90-6c95e9a3374b] Running
	I0816 16:50:09.709869   17475 system_pods.go:89] "kube-proxy-vcpxh" [fa9fb911-4140-45c4-b33c-e7c7616ee708] Running
	I0816 16:50:09.709873   17475 system_pods.go:89] "kube-scheduler-addons-671083" [944ee8a4-dc5e-481b-bedd-56a6c34ba6e7] Running
	I0816 16:50:09.709881   17475 system_pods.go:89] "metrics-server-8988944d9-qjczl" [499be229-e123-4025-afef-b0608d31b95d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 16:50:09.709886   17475 system_pods.go:89] "nvidia-device-plugin-daemonset-6fkvh" [fad33474-a661-4441-a3d3-61e1e753fc6a] Running
	I0816 16:50:09.709892   17475 system_pods.go:89] "registry-6fb4cdfc84-rvzfr" [ef669560-d120-4b0c-96ee-3b4786b10c8c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 16:50:09.709900   17475 system_pods.go:89] "registry-proxy-qpbf4" [afdfd628-7037-4056-b825-d6a9bf88c250] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 16:50:09.709906   17475 system_pods.go:89] "snapshot-controller-56fcc65765-2trrn" [bd75e67c-ed92-466e-8915-d2d5d1e87ad6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.709912   17475 system_pods.go:89] "snapshot-controller-56fcc65765-6kxd8" [2b18846d-7cdb-4733-8f3a-7f522ff67f18] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 16:50:09.709918   17475 system_pods.go:89] "storage-provisioner" [eb6c00fa-72db-4dfe-a3d9-054186223927] Running
	I0816 16:50:09.709924   17475 system_pods.go:89] "tiller-deploy-b48cc5f79-xdgrc" [9075d95d-30f9-45ec-944b-3ee3d7e01862] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0816 16:50:09.709931   17475 system_pods.go:126] duration metric: took 10.282712ms to wait for k8s-apps to be running ...
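The same kube-system inventory can be listed by hand; a sketch assuming the context from this run:

    kubectl --context addons-671083 -n kube-system get pods -o wide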
	I0816 16:50:09.709940   17475 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 16:50:09.709979   17475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 16:50:09.723790   17475 system_svc.go:56] duration metric: took 13.84229ms WaitForService to wait for kubelet
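The kubelet liveness check above runs systemctl inside the VM over SSH. A rough manual equivalent, assuming the profile name from this log:

    minikube -p addons-671083 ssh "sudo systemctl is-active kubelet"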
	I0816 16:50:09.723820   17475 kubeadm.go:582] duration metric: took 28.040689568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 16:50:09.723838   17475 node_conditions.go:102] verifying NodePressure condition ...
	I0816 16:50:09.726840   17475 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 16:50:09.726863   17475 node_conditions.go:123] node cpu capacity is 2
	I0816 16:50:09.726874   17475 node_conditions.go:105] duration metric: took 3.032489ms to run NodePressure ...
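The capacity figures above are read from the node's status. A hedged way to read them back, assuming the single node is named after the profile:

    kubectl --context addons-671083 get node addons-671083 -o jsonpath='{.status.capacity}'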
	I0816 16:50:09.726885   17475 start.go:241] waiting for startup goroutines ...
	I0816 16:50:09.860518   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:09.862404   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:09.864805   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:10.028321   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:10.360593   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:10.361621   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:10.365099   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:10.528915   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:10.861351   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:10.862786   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:10.864980   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:11.029110   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:11.360951   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:11.363856   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:11.366756   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:11.527787   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:11.860121   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:11.862412   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:11.864801   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:12.028311   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:12.359949   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:12.361710   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:12.364933   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:12.528444   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:12.860429   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:12.861987   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:12.865606   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:13.028157   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:13.361318   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:13.363054   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:13.365612   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:13.527966   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:13.860777   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:13.863081   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:13.864923   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:14.028656   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:14.360939   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:14.363038   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:14.366705   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:14.527977   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:14.861821   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:14.863606   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:14.865787   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:15.028289   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:15.359348   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:15.362133   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:15.365607   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:15.527553   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:15.859735   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:15.861965   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:15.865437   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:16.027811   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:16.360973   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:16.362531   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:16.368166   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:16.529397   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:16.860607   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:16.862916   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:16.866133   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:17.028938   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:17.360520   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:17.362744   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:17.364962   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:17.528876   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:17.861194   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:17.863021   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:17.865493   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:18.027810   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:18.361020   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:18.362337   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:18.365496   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:18.527958   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:18.859350   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:18.861493   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:18.864545   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:19.028181   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:19.359237   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:19.361677   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:19.364483   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:19.528007   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:19.860594   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:19.863399   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:19.867358   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:20.028421   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:20.359503   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:20.361675   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:20.364496   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:20.527916   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:20.860517   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:20.862939   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:20.867912   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:21.027829   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:21.360023   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:21.362233   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:21.365635   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:21.527974   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:21.862530   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:21.862626   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:21.864681   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:22.028093   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:22.361763   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:22.363337   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:22.365765   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:22.527945   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:22.860383   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:22.862402   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:22.865016   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:23.027787   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:23.359502   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:23.362524   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:23.365338   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:23.527671   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:23.860123   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:23.861975   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:23.867744   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:24.027740   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:24.359884   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:24.362739   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:24.365081   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:24.528848   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:24.862681   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:24.862917   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:24.870759   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:25.028240   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:25.361947   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:25.363475   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:25.365319   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:25.530241   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:25.859923   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:25.861924   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:25.865187   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:26.028999   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:26.360955   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:26.362729   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:26.364897   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:26.528878   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:26.860460   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:26.862060   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:26.865429   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:27.028003   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:27.360653   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:27.363275   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:27.365541   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:27.528087   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:27.860013   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:27.862854   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:27.864926   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:28.028493   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:28.542203   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:28.542591   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:28.543012   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:28.544368   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:28.860611   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:28.862712   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:28.865682   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:29.027982   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:29.361087   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:29.362686   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:29.365999   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:29.528874   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:29.860263   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:29.862223   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:29.865930   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:30.027859   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:30.363733   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:30.364350   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:30.368921   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:30.528932   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:30.860771   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:30.862965   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:30.865567   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:31.027729   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:31.360301   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:31.362972   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:31.365100   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:31.528940   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:31.862138   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:31.863473   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:31.866338   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:32.028968   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:32.360706   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:32.365524   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:32.367508   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:32.527824   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:32.860280   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:32.862802   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:32.864671   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:33.028371   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:33.360723   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:33.362424   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:33.372647   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:33.527875   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:33.862940   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:33.865771   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:33.865771   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:34.029225   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:34.360451   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:34.362260   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:34.365305   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:34.528058   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:34.862264   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:34.865647   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:34.866191   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:35.027514   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:35.360698   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:35.362567   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:35.365704   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:35.527563   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:35.860910   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:35.862418   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:35.865906   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:36.027954   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:36.360652   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:36.362901   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:36.364928   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:36.528116   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:36.859841   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:36.862330   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:36.865568   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:37.028470   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:37.359460   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:37.363051   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:37.366268   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:37.530384   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:37.859664   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:37.861992   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:37.865073   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:38.028547   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:38.360100   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:38.362526   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:38.364893   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:38.527710   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:38.860663   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:38.862716   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:38.864799   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:39.028010   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:39.794797   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:39.795224   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:39.795576   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:39.796218   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:39.860376   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:39.863163   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:39.865344   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:40.027488   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:40.359960   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:40.362645   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:40.367745   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:40.528062   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:40.859951   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:40.862471   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:40.866853   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:41.028168   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:41.360878   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:41.363269   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:41.367081   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:41.527920   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:41.859803   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 16:50:41.862207   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:41.865709   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:42.027949   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:42.360540   17475 kapi.go:107] duration metric: took 52.004412255s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 16:50:42.362911   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:42.364930   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:42.528316   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:42.863554   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:42.865884   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:43.028655   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:43.364573   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:43.366792   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:43.528335   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:43.864684   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:43.867075   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:44.027875   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:44.365907   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:44.367079   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:44.528725   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:44.863371   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:44.865795   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:45.029931   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:45.362678   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:45.364893   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:45.528345   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:45.862243   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:45.866012   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:46.027882   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:46.365029   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:46.367018   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:46.528857   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:46.863142   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:46.865924   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:47.029094   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:47.363673   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:47.369300   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:47.528724   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:47.862321   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:47.865447   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:48.028485   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:48.366233   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:48.366582   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:48.527898   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:48.863701   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:48.865555   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:49.028601   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:49.365487   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:49.369208   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:49.533382   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:49.863442   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:49.865812   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:50.027862   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:50.365155   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:50.367813   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:50.528819   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:50.863413   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:50.865627   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:51.027954   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:51.362763   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:51.365011   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:51.528428   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:51.863348   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:51.866960   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:52.028687   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:52.362885   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:52.365329   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:52.527454   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:52.863075   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:52.866466   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:53.027928   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:53.368086   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:53.369333   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:53.534008   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:53.867163   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:53.868069   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:54.028761   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:54.365042   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:54.368168   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:54.528706   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:54.865194   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:54.867551   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:55.028176   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:55.363114   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:55.365685   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:55.527754   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:55.862084   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:55.865950   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:56.028645   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:56.363119   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:56.366567   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:56.528800   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:56.864169   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:56.866121   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:57.028108   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:57.363505   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:57.365766   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:57.528205   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:57.862943   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:57.865110   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:58.028487   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:58.770497   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:58.773998   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:58.774579   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:58.865527   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:58.865599   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:59.029254   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:59.363442   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:59.365904   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:50:59.527567   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:50:59.862234   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:50:59.865507   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:00.028193   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:00.363408   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:00.366531   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:00.528442   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:00.863140   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:00.866339   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:01.027814   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:01.362574   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:01.365337   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:01.661019   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:01.863194   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:01.865992   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:02.028536   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:02.362440   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:02.365669   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:02.527888   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:02.862505   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:02.866144   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:03.028918   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:03.363014   17475 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 16:51:03.366846   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:03.529009   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:03.863598   17475 kapi.go:107] duration metric: took 1m13.505283747s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 16:51:03.866415   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:04.027831   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:04.365949   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:04.566853   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:04.865922   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:05.028121   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:05.366873   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:05.527746   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:05.865856   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:06.029004   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:06.368160   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:06.528098   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:06.865940   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:07.028510   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:07.366427   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:07.765929   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:07.930061   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:08.033467   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:08.366374   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:08.528612   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 16:51:08.867858   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:09.028678   17475 kapi.go:107] duration metric: took 1m16.004190043s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 16:51:09.030463   17475 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-671083 cluster.
	I0816 16:51:09.031770   17475 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 16:51:09.032931   17475 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 16:51:09.367562   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:09.867216   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:10.367044   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:10.865759   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:11.366694   17475 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 16:51:11.866871   17475 kapi.go:107] duration metric: took 1m20.505239805s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 16:51:11.868681   17475 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, helm-tiller, inspektor-gadget, storage-provisioner, metrics-server, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0816 16:51:11.869969   17475 addons.go:510] duration metric: took 1m30.186821124s for enable addons: enabled=[cloud-spanner nvidia-device-plugin helm-tiller inspektor-gadget storage-provisioner metrics-server ingress-dns yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0816 16:51:11.869999   17475 start.go:246] waiting for cluster config update ...
	I0816 16:51:11.870016   17475 start.go:255] writing updated cluster config ...
	I0816 16:51:11.870250   17475 ssh_runner.go:195] Run: rm -f paused
	I0816 16:51:11.921390   17475 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 16:51:11.923333   17475 out.go:177] * Done! kubectl is now configured to use "addons-671083" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.317609270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827421317581994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d1aae94-fca4-4fca-bd1a-92290f8a1220 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.318430606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6236383-22f3-40b2-83e5-15ba7b20e239 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.318486762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6236383-22f3-40b2-83e5-15ba7b20e239 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.318738361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6236383-22f3-40b2-83e5-15ba7b20e239 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.353217068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f697f10e-98f2-42e2-afeb-9af28de04d19 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.353297883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f697f10e-98f2-42e2-afeb-9af28de04d19 name=/runtime.v1.RuntimeService/Version
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.354254943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2857421c-ccaf-4030-a57c-1d509443828d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.355489577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827421355465343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2857421c-ccaf-4030-a57c-1d509443828d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.355927075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29c72d4b-0db2-4e15-a9af-6c30998a295d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.356081978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29c72d4b-0db2-4e15-a9af-6c30998a295d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.356437392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29c72d4b-0db2-4e15-a9af-6c30998a295d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.390718537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=284a7c68-bb42-4db1-8a88-ccfa44641bad name=/runtime.v1.RuntimeService/Version
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.390807128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=284a7c68-bb42-4db1-8a88-ccfa44641bad name=/runtime.v1.RuntimeService/Version
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.392591144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b54eb572-4746-42b6-a825-4c67f6f0958e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.394359230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827421394332660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b54eb572-4746-42b6-a825-4c67f6f0958e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.395112954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21e40035-eb3b-4f9c-a611-314517f5e23a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.395180965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21e40035-eb3b-4f9c-a611-314517f5e23a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.395595313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21e40035-eb3b-4f9c-a611-314517f5e23a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.428191658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c54243d0-332b-410b-af4a-23fa249be9cb name=/runtime.v1.RuntimeService/Version
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.428275744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c54243d0-332b-410b-af4a-23fa249be9cb name=/runtime.v1.RuntimeService/Version
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.429467510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44069bda-6bdb-470b-8f82-539a67482dec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.430773136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827421430745602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44069bda-6bdb-470b-8f82-539a67482dec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.431543659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5c0462e-bc17-4705-905a-4decc4a8f859 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.431617006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5c0462e-bc17-4705-905a-4decc4a8f859 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 16:57:01 addons-671083 crio[680]: time="2024-08-16 16:57:01.431872279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2c5cac8a8c8bdb7ced0494559d11953eb81232a7f5d807b79234a4ffe2d4e5c,PodSandboxId:ce3d999bfbec42e0208c5d963206c1219589a05073ea40e015c4966447954518,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723827274111414073,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-srmzf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7068bb8f-88d1-41f1-b4ff-bd9559d40ee7,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2728ee91db6d9eea237bcf970aaa9ffab3dd088c5cefdb095638884c59f198d7,PodSandboxId:6686f5dd4c095493ee26735af3a18c0d48fb0a0c6e022bcb22e2f4fc3adb61ec,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1723827135580825464,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0bf1f79-56d5-4c95-8a88-8e8d0007a72a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299d550fb32349bb9d9d7cce45a8abdb4782e4028dddba86ca97a077a395521b,PodSandboxId:b43c1481fdf82bb63aafdb868d1ae5bd26e7d6ec7087f10eb06e12a878e3b7f1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723827075363707655,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f993024-c844-4f21-8
ed5-7df2f4b636be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0769de1a3f711a5862de64148f60cf28f6f1c725631004ebf4b83dc040bd0616,PodSandboxId:8348cb178c21fa9fabcdcbe9d51fece4f34fb61167102fb0fd0bd11b03a53189,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723827025489057762,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jf7ql,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: 67fd9b18-127b-45cb-9434-b9b807138706,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1008dba1c022a67c1e5f7040f8e7db78ed122390ebda9fa686317a646762361c,PodSandboxId:4bf146cb19f670d5a91735293e5af7090f09fdc8b476bc5ddbc19b74324cb56d,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723827017114777577,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-qjczl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 499be229-e123-4025-afef-b0608d31b95d,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5,PodSandboxId:42622eae74032d98c6953bdc561c5ce3ee399e1105565d3933425951d90372b6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723826987845184113,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6c00fa-72db-4dfe-a3d9-054186223927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979,PodSandboxId:3ac4881b3f4898a71bd50733b145873cb83ecb57dbee9f1379dae9cc971b93b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723826987073782131,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jq9bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50cf4e20-39bf-4c95-9744-3f86148fcb61,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f,PodSandboxId:574d3d07260a8541ea77f7db73c55408246bdf4a71edfd1741cbc1d7ab9903fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723826983394072696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9fb911-4140-45c4-b33c-e7c7616ee708,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946,PodSandboxId:8c08976e9f90f1f518a3ebcaef8fbf75a26695bd5abfb0249bffaf85bc35a633,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723826972120943660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fe7c910e37cd6e2e0e474ecd951dca1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332,PodSandboxId:b8adbe06dba655226a5a7f302adc0e7e7234bcc03e6c0b7aa6b6d3caad318048,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723826972136516089,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161a4df41e89e657e65727c136980d27,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991,PodSandboxId:cdc95547c0356ee5e6070f63895afc75d7ed65b01151acd587cc2b7e0e84c4f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723826972105392619,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c965858a20f7454a1a3c5d6188150f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee,PodSandboxId:29325b5aece3179416642561b09b99784f74c6e902fb4c836a3076645248bb92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723826972046945926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-671083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284fdddcea6b0b0d292dda0281462e3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5c0462e-bc17-4705-905a-4decc4a8f859 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f2c5cac8a8c8b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   ce3d999bfbec4       hello-world-app-55bf9c44b4-srmzf
	2728ee91db6d9       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         4 minutes ago       Running             nginx                     0                   6686f5dd4c095       nginx
	299d550fb3234       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   b43c1481fdf82       busybox
	0769de1a3f711       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   8348cb178c21f       local-path-provisioner-86d989889c-jf7ql
	1008dba1c022a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   4bf146cb19f67       metrics-server-8988944d9-qjczl
	a055487b6473e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   42622eae74032       storage-provisioner
	421d348188441       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   3ac4881b3f489       coredns-6f6b679f8f-jq9bq
	0ee6d2726d733       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   574d3d07260a8       kube-proxy-vcpxh
	665e133f73ce9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   b8adbe06dba65       etcd-addons-671083
	738c5d5fbb538       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   8c08976e9f90f       kube-scheduler-addons-671083
	543ca544863e9       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   cdc95547c0356       kube-controller-manager-addons-671083
	074ca5d31f9b9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   29325b5aece31       kube-apiserver-addons-671083
	
	
	==> coredns [421d348188441902e3955b86ce1478141bab0b13d05c11f202366ce16e5c5979] <==
	[INFO] 10.244.0.7:42200 - 9049 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001263993s
	[INFO] 10.244.0.7:48689 - 58184 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009508s
	[INFO] 10.244.0.7:48689 - 46923 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007067s
	[INFO] 10.244.0.7:42660 - 61551 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130505s
	[INFO] 10.244.0.7:42660 - 5228 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052733s
	[INFO] 10.244.0.7:41080 - 40141 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104365s
	[INFO] 10.244.0.7:41080 - 16076 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083295s
	[INFO] 10.244.0.7:34117 - 46189 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089031s
	[INFO] 10.244.0.7:34117 - 57962 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000041026s
	[INFO] 10.244.0.7:34023 - 15348 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048946s
	[INFO] 10.244.0.7:34023 - 502 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000024639s
	[INFO] 10.244.0.7:37032 - 15531 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049525s
	[INFO] 10.244.0.7:37032 - 37797 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000021836s
	[INFO] 10.244.0.7:40989 - 27476 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000842s
	[INFO] 10.244.0.7:40989 - 64341 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042739s
	[INFO] 10.244.0.22:51805 - 27391 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000342041s
	[INFO] 10.244.0.22:52546 - 37114 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152636s
	[INFO] 10.244.0.22:45158 - 51536 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00007447s
	[INFO] 10.244.0.22:33090 - 44549 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011485s
	[INFO] 10.244.0.22:55780 - 37157 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098638s
	[INFO] 10.244.0.22:41087 - 18887 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097595s
	[INFO] 10.244.0.22:37851 - 13996 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000676163s
	[INFO] 10.244.0.22:51319 - 64786 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00040473s
	[INFO] 10.244.0.26:35684 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000337877s
	[INFO] 10.244.0.26:60301 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084096s
	
	
	==> describe nodes <==
	Name:               addons-671083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-671083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=addons-671083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T16_49_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-671083
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 16:49:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-671083
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 16:56:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 16:54:43 +0000   Fri, 16 Aug 2024 16:49:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 16:54:43 +0000   Fri, 16 Aug 2024 16:49:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 16:54:43 +0000   Fri, 16 Aug 2024 16:49:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 16:54:43 +0000   Fri, 16 Aug 2024 16:49:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    addons-671083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae86c2bab65540d7978e0e5805419ef0
	  System UUID:                ae86c2ba-b655-40d7-978e-0e5805419ef0
	  Boot ID:                    b0c049e6-f0f9-4d60-a9b9-0af8d52b57a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  default                     hello-world-app-55bf9c44b4-srmzf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 coredns-6f6b679f8f-jq9bq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m19s
	  kube-system                 etcd-addons-671083                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m24s
	  kube-system                 kube-apiserver-addons-671083               250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-controller-manager-addons-671083      200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-vcpxh                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-addons-671083               100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 metrics-server-8988944d9-qjczl             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m14s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  local-path-storage          local-path-provisioner-86d989889c-jf7ql    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m17s  kube-proxy       
	  Normal  Starting                 7m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m24s  kubelet          Node addons-671083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s  kubelet          Node addons-671083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s  kubelet          Node addons-671083 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m23s  kubelet          Node addons-671083 status is now: NodeReady
	  Normal  RegisteredNode           7m20s  node-controller  Node addons-671083 event: Registered Node addons-671083 in Controller
	
	
	==> dmesg <==
	[  +5.024557] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.173314] kauditd_printk_skb: 49 callbacks suppressed
	[Aug16 16:50] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.551593] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.452215] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.182027] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.289785] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.115476] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.237894] kauditd_printk_skb: 83 callbacks suppressed
	[Aug16 16:51] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.005634] kauditd_printk_skb: 30 callbacks suppressed
	[ +11.536129] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.859176] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.891815] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.057029] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.180224] kauditd_printk_skb: 72 callbacks suppressed
	[  +6.489063] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.538649] kauditd_printk_skb: 33 callbacks suppressed
	[Aug16 16:52] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.019379] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.754739] kauditd_printk_skb: 21 callbacks suppressed
	[ +35.129381] kauditd_printk_skb: 7 callbacks suppressed
	[Aug16 16:53] kauditd_printk_skb: 33 callbacks suppressed
	[Aug16 16:54] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.146471] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [665e133f73ce9972b588936faa219e58168f2f3bd3c18519302c89613e188332] <==
	{"level":"info","ts":"2024-08-16T16:50:58.746061Z","caller":"traceutil/trace.go:171","msg":"trace[1297409268] linearizableReadLoop","detail":"{readStateIndex:1143; appliedIndex:1141; }","duration":"429.79018ms","start":"2024-08-16T16:50:58.316237Z","end":"2024-08-16T16:50:58.746027Z","steps":["trace[1297409268] 'read index received'  (duration: 422.293479ms)","trace[1297409268] 'applied index is now lower than readState.Index'  (duration: 7.496233ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T16:50:58.746425Z","caller":"traceutil/trace.go:171","msg":"trace[2025505688] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"453.049305ms","start":"2024-08-16T16:50:58.293366Z","end":"2024-08-16T16:50:58.746416Z","steps":["trace[2025505688] 'process raft request'  (duration: 452.554641ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.746541Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.293348Z","time spent":"453.127204ms","remote":"127.0.0.1:44972","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":10366,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-vz8gb\" mod_revision:1086 > success:<request_put:<key:\"/registry/pods/gadget/gadget-vz8gb\" value_size:10324 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-vz8gb\" > >"}
	{"level":"warn","ts":"2024-08-16T16:50:58.746733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.485724ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.746778Z","caller":"traceutil/trace.go:171","msg":"trace[536864379] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1111; }","duration":"430.53937ms","start":"2024-08-16T16:50:58.316230Z","end":"2024-08-16T16:50:58.746770Z","steps":["trace[536864379] 'agreement among raft nodes before linearized reading'  (duration: 430.449115ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.746798Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.316164Z","time spent":"430.629934ms","remote":"127.0.0.1:44792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-16T16:50:58.746959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.336105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.747044Z","caller":"traceutil/trace.go:171","msg":"trace[147962105] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"398.423075ms","start":"2024-08-16T16:50:58.348616Z","end":"2024-08-16T16:50:58.747039Z","steps":["trace[147962105] 'agreement among raft nodes before linearized reading'  (duration: 398.323395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.747068Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.348583Z","time spent":"398.479686ms","remote":"127.0.0.1:44972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-16T16:50:58.747537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.612814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.747567Z","caller":"traceutil/trace.go:171","msg":"trace[220685933] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"233.657465ms","start":"2024-08-16T16:50:58.513904Z","end":"2024-08-16T16:50:58.747561Z","steps":["trace[220685933] 'agreement among raft nodes before linearized reading'  (duration: 233.60375ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.747694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.628658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:50:58.747708Z","caller":"traceutil/trace.go:171","msg":"trace[1734614247] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"396.644791ms","start":"2024-08-16T16:50:58.351059Z","end":"2024-08-16T16:50:58.747704Z","steps":["trace[1734614247] 'agreement among raft nodes before linearized reading'  (duration: 396.596762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:50:58.747721Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T16:50:58.351033Z","time spent":"396.685106ms","remote":"127.0.0.1:44972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-16T16:51:01.643169Z","caller":"traceutil/trace.go:171","msg":"trace[838376814] linearizableReadLoop","detail":"{readStateIndex:1151; appliedIndex:1150; }","duration":"128.922741ms","start":"2024-08-16T16:51:01.514233Z","end":"2024-08-16T16:51:01.643156Z","steps":["trace[838376814] 'read index received'  (duration: 128.613949ms)","trace[838376814] 'applied index is now lower than readState.Index'  (duration: 308.345µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T16:51:01.643471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.173917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:51:01.643565Z","caller":"traceutil/trace.go:171","msg":"trace[1014669857] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"129.326085ms","start":"2024-08-16T16:51:01.514230Z","end":"2024-08-16T16:51:01.643556Z","steps":["trace[1014669857] 'agreement among raft nodes before linearized reading'  (duration: 129.124399ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T16:51:01.643893Z","caller":"traceutil/trace.go:171","msg":"trace[824408050] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"182.606835ms","start":"2024-08-16T16:51:01.461274Z","end":"2024-08-16T16:51:01.643881Z","steps":["trace[824408050] 'process raft request'  (duration: 181.625416ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T16:51:07.749604Z","caller":"traceutil/trace.go:171","msg":"trace[1652212945] linearizableReadLoop","detail":"{readStateIndex:1180; appliedIndex:1179; }","duration":"236.494744ms","start":"2024-08-16T16:51:07.513095Z","end":"2024-08-16T16:51:07.749590Z","steps":["trace[1652212945] 'read index received'  (duration: 236.364981ms)","trace[1652212945] 'applied index is now lower than readState.Index'  (duration: 129.363µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T16:51:07.749838Z","caller":"traceutil/trace.go:171","msg":"trace[1323612367] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"299.413625ms","start":"2024-08-16T16:51:07.450415Z","end":"2024-08-16T16:51:07.749829Z","steps":["trace[1323612367] 'process raft request'  (duration: 299.08505ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:51:07.750015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.046909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-16T16:51:07.750050Z","caller":"traceutil/trace.go:171","msg":"trace[1565559132] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1146; }","duration":"222.138413ms","start":"2024-08-16T16:51:07.527904Z","end":"2024-08-16T16:51:07.750043Z","steps":["trace[1565559132] 'agreement among raft nodes before linearized reading'  (duration: 221.995813ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T16:51:07.750113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.009443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T16:51:07.750137Z","caller":"traceutil/trace.go:171","msg":"trace[690359742] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"237.038317ms","start":"2024-08-16T16:51:07.513091Z","end":"2024-08-16T16:51:07.750130Z","steps":["trace[690359742] 'agreement among raft nodes before linearized reading'  (duration: 236.995962ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T16:52:14.752435Z","caller":"traceutil/trace.go:171","msg":"trace[1066084898] transaction","detail":"{read_only:false; response_revision:1647; number_of_response:1; }","duration":"299.049343ms","start":"2024-08-16T16:52:14.453361Z","end":"2024-08-16T16:52:14.752410Z","steps":["trace[1066084898] 'process raft request'  (duration: 298.941178ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:57:01 up 7 min,  0 users,  load average: 0.12, 0.76, 0.56
	Linux addons-671083 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [074ca5d31f9b925dc5ae3ee9f953a10d88c2e8c3842a558542504fd550639eee] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 16:51:27.503378       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0816 16:51:27.517044       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0816 16:51:54.010764       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.237.105"}
	I0816 16:52:05.752034       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0816 16:52:06.888782       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0816 16:52:11.242353       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0816 16:52:11.427044       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.124.212"}
	I0816 16:52:27.215757       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0816 16:53:04.571544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.572298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.594480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.594548       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.607711       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.608502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.623682       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.623741       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0816 16:53:04.771491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0816 16:53:04.771592       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0816 16:53:05.624129       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0816 16:53:05.772530       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0816 16:53:05.773673       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0816 16:54:31.501118       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.174.61"}
	
	
	==> kube-controller-manager [543ca544863e9e2a5d4a129d883c277b09cba9632bf110ffe5fe1cbd96011991] <==
	W0816 16:55:14.836496       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:55:14.836561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:55:16.382172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:55:16.382217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:55:26.790480       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:55:26.790542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:55:33.767812       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:55:33.768052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:55:46.685088       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:55:46.685319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:55:47.573465       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:55:47.573514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:56:18.378812       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:56:18.379166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:56:20.704929       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:56:20.705012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:56:23.988289       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:56:23.988343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:56:24.891930       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:56:24.892087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:56:56.974310       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:56:56.974449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0816 16:56:58.811276       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0816 16:56:58.811342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0816 16:57:00.504228       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="14.413µs"
	
	
	==> kube-proxy [0ee6d2726d73385e27a2719444dc6f623400d8b8a63e729e5d6b431d5af73b7f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 16:49:44.050949       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 16:49:44.065378       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	E0816 16:49:44.065448       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 16:49:44.130808       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 16:49:44.130853       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 16:49:44.130880       1 server_linux.go:169] "Using iptables Proxier"
	I0816 16:49:44.133240       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 16:49:44.133458       1 server.go:483] "Version info" version="v1.31.0"
	I0816 16:49:44.133484       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 16:49:44.136181       1 config.go:197] "Starting service config controller"
	I0816 16:49:44.136204       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 16:49:44.136241       1 config.go:104] "Starting endpoint slice config controller"
	I0816 16:49:44.136257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 16:49:44.137042       1 config.go:326] "Starting node config controller"
	I0816 16:49:44.137063       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 16:49:44.236528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 16:49:44.236581       1 shared_informer.go:320] Caches are synced for service config
	I0816 16:49:44.237154       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [738c5d5fbb5383fe98b87374293bf6c3ae61553b928d7a5ac91a577aac118946] <==
	W0816 16:49:34.669335       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 16:49:34.669533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:34.669924       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 16:49:34.670047       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 16:49:35.501090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 16:49:35.501138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.508114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 16:49:35.508158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.543566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 16:49:35.543614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.595687       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 16:49:35.595739       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 16:49:35.630321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 16:49:35.630441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.742240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 16:49:35.742334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.746758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 16:49:35.746931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.788063       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 16:49:35.788172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.800387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 16:49:35.800507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 16:49:35.804441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 16:49:35.804487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0816 16:49:38.538752       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 16:55:57 addons-671083 kubelet[1211]: E0816 16:55:57.573303    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827357572924891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:55:57 addons-671083 kubelet[1211]: E0816 16:55:57.573342    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827357572924891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:07 addons-671083 kubelet[1211]: E0816 16:56:07.575790    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827367575415182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:07 addons-671083 kubelet[1211]: E0816 16:56:07.575823    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827367575415182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:16 addons-671083 kubelet[1211]: I0816 16:56:16.231592    1211 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 16:56:17 addons-671083 kubelet[1211]: E0816 16:56:17.579952    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827377579269730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:17 addons-671083 kubelet[1211]: E0816 16:56:17.580772    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827377579269730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:27 addons-671083 kubelet[1211]: E0816 16:56:27.587655    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827387583316054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:27 addons-671083 kubelet[1211]: E0816 16:56:27.587946    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827387583316054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:37 addons-671083 kubelet[1211]: E0816 16:56:37.254868    1211 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 16:56:37 addons-671083 kubelet[1211]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 16:56:37 addons-671083 kubelet[1211]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 16:56:37 addons-671083 kubelet[1211]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 16:56:37 addons-671083 kubelet[1211]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 16:56:37 addons-671083 kubelet[1211]: E0816 16:56:37.590496    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827397590186231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:37 addons-671083 kubelet[1211]: E0816 16:56:37.590532    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827397590186231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:47 addons-671083 kubelet[1211]: E0816 16:56:47.593955    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827407593454025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:47 addons-671083 kubelet[1211]: E0816 16:56:47.594347    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827407593454025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:57 addons-671083 kubelet[1211]: E0816 16:56:57.597447    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827417597083357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:56:57 addons-671083 kubelet[1211]: E0816 16:56:57.597510    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723827417597083357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 16:57:00 addons-671083 kubelet[1211]: I0816 16:57:00.523529    1211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-srmzf" podStartSLOduration=147.277250711 podStartE2EDuration="2m29.523507921s" podCreationTimestamp="2024-08-16 16:54:31 +0000 UTC" firstStartedPulling="2024-08-16 16:54:31.85390841 +0000 UTC m=+294.764531384" lastFinishedPulling="2024-08-16 16:54:34.100165618 +0000 UTC m=+297.010788594" observedRunningTime="2024-08-16 16:54:34.658265773 +0000 UTC m=+297.568888768" watchObservedRunningTime="2024-08-16 16:57:00.523507921 +0000 UTC m=+443.434130913"
	Aug 16 16:57:01 addons-671083 kubelet[1211]: I0816 16:57:01.909570    1211 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/499be229-e123-4025-afef-b0608d31b95d-tmp-dir\") pod \"499be229-e123-4025-afef-b0608d31b95d\" (UID: \"499be229-e123-4025-afef-b0608d31b95d\") "
	Aug 16 16:57:01 addons-671083 kubelet[1211]: I0816 16:57:01.909617    1211 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ppl5\" (UniqueName: \"kubernetes.io/projected/499be229-e123-4025-afef-b0608d31b95d-kube-api-access-5ppl5\") pod \"499be229-e123-4025-afef-b0608d31b95d\" (UID: \"499be229-e123-4025-afef-b0608d31b95d\") "
	Aug 16 16:57:01 addons-671083 kubelet[1211]: I0816 16:57:01.910221    1211 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499be229-e123-4025-afef-b0608d31b95d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "499be229-e123-4025-afef-b0608d31b95d" (UID: "499be229-e123-4025-afef-b0608d31b95d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 16 16:57:01 addons-671083 kubelet[1211]: I0816 16:57:01.921129    1211 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499be229-e123-4025-afef-b0608d31b95d-kube-api-access-5ppl5" (OuterVolumeSpecName: "kube-api-access-5ppl5") pod "499be229-e123-4025-afef-b0608d31b95d" (UID: "499be229-e123-4025-afef-b0608d31b95d"). InnerVolumeSpecName "kube-api-access-5ppl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [a055487b6473ea1f4a4d8325ff9e8ceeec758a776bcdd735b703f27cc4fafde5] <==
	I0816 16:49:49.330518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 16:49:49.464713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 16:49:49.464775       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 16:49:49.637053       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 16:49:49.741365       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-671083_d4cf6f3f-58d0-454d-b31d-a9613105700e!
	I0816 16:49:49.741444       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d42da215-d37b-49ed-8472-72b6409bcac2", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-671083_d4cf6f3f-58d0-454d-b31d-a9613105700e became leader
	I0816 16:49:50.042350       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-671083_d4cf6f3f-58d0-454d-b31d-a9613105700e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-671083 -n addons-671083
helpers_test.go:261: (dbg) Run:  kubectl --context addons-671083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (319.49s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-671083
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-671083: exit status 82 (2m0.448011852s)

                                                
                                                
-- stdout --
	* Stopping node "addons-671083"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-671083" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-671083
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-671083: exit status 11 (21.53217078s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.240:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-671083" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-671083
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-671083: exit status 11 (6.144294186s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.240:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-671083" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-671083
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-671083: exit status 11 (6.143030275s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.240:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-671083" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.27s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 node stop m02 -v=7 --alsologtostderr
E0816 17:09:02.039415   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:09:43.001115   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.46316138s)

                                                
                                                
-- stdout --
	* Stopping node "ha-764617-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:08:46.159060   31306 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:08:46.159191   31306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:08:46.159201   31306 out.go:358] Setting ErrFile to fd 2...
	I0816 17:08:46.159205   31306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:08:46.159497   31306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:08:46.159900   31306 mustload.go:65] Loading cluster: ha-764617
	I0816 17:08:46.161007   31306 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:08:46.161030   31306 stop.go:39] StopHost: ha-764617-m02
	I0816 17:08:46.161581   31306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:08:46.161628   31306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:08:46.177331   31306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
	I0816 17:08:46.177813   31306 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:08:46.178371   31306 main.go:141] libmachine: Using API Version  1
	I0816 17:08:46.178410   31306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:08:46.178760   31306 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:08:46.180989   31306 out.go:177] * Stopping node "ha-764617-m02"  ...
	I0816 17:08:46.182090   31306 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 17:08:46.182114   31306 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:08:46.182325   31306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 17:08:46.182348   31306 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:08:46.185324   31306 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:08:46.185773   31306 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:08:46.185802   31306 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:08:46.185921   31306 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:08:46.186100   31306 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:08:46.186274   31306 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:08:46.186450   31306 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:08:46.268186   31306 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 17:08:46.321458   31306 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 17:08:46.375909   31306 main.go:141] libmachine: Stopping "ha-764617-m02"...
	I0816 17:08:46.375939   31306 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:08:46.377421   31306 main.go:141] libmachine: (ha-764617-m02) Calling .Stop
	I0816 17:08:46.380847   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 0/120
	I0816 17:08:47.382347   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 1/120
	I0816 17:08:48.383860   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 2/120
	I0816 17:08:49.385124   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 3/120
	I0816 17:08:50.386627   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 4/120
	I0816 17:08:51.388293   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 5/120
	I0816 17:08:52.389853   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 6/120
	I0816 17:08:53.391176   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 7/120
	I0816 17:08:54.392454   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 8/120
	I0816 17:08:55.393646   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 9/120
	I0816 17:08:56.395724   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 10/120
	I0816 17:08:57.397002   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 11/120
	I0816 17:08:58.399100   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 12/120
	I0816 17:08:59.401243   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 13/120
	I0816 17:09:00.403528   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 14/120
	I0816 17:09:01.405428   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 15/120
	I0816 17:09:02.407068   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 16/120
	I0816 17:09:03.408915   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 17/120
	I0816 17:09:04.411131   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 18/120
	I0816 17:09:05.412342   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 19/120
	I0816 17:09:06.414346   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 20/120
	I0816 17:09:07.415621   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 21/120
	I0816 17:09:08.417162   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 22/120
	I0816 17:09:09.418556   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 23/120
	I0816 17:09:10.419717   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 24/120
	I0816 17:09:11.421640   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 25/120
	I0816 17:09:12.422973   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 26/120
	I0816 17:09:13.424452   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 27/120
	I0816 17:09:14.425940   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 28/120
	I0816 17:09:15.427137   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 29/120
	I0816 17:09:16.429547   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 30/120
	I0816 17:09:17.431248   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 31/120
	I0816 17:09:18.432791   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 32/120
	I0816 17:09:19.435252   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 33/120
	I0816 17:09:20.436478   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 34/120
	I0816 17:09:21.438391   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 35/120
	I0816 17:09:22.440227   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 36/120
	I0816 17:09:23.441483   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 37/120
	I0816 17:09:24.443064   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 38/120
	I0816 17:09:25.444421   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 39/120
	I0816 17:09:26.446548   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 40/120
	I0816 17:09:27.447667   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 41/120
	I0816 17:09:28.448991   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 42/120
	I0816 17:09:29.450554   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 43/120
	I0816 17:09:30.452023   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 44/120
	I0816 17:09:31.454001   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 45/120
	I0816 17:09:32.455293   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 46/120
	I0816 17:09:33.457025   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 47/120
	I0816 17:09:34.459171   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 48/120
	I0816 17:09:35.461022   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 49/120
	I0816 17:09:36.463171   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 50/120
	I0816 17:09:37.464561   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 51/120
	I0816 17:09:38.466000   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 52/120
	I0816 17:09:39.467248   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 53/120
	I0816 17:09:40.469051   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 54/120
	I0816 17:09:41.470698   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 55/120
	I0816 17:09:42.472057   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 56/120
	I0816 17:09:43.473492   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 57/120
	I0816 17:09:44.474814   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 58/120
	I0816 17:09:45.476247   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 59/120
	I0816 17:09:46.478699   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 60/120
	I0816 17:09:47.480335   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 61/120
	I0816 17:09:48.482197   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 62/120
	I0816 17:09:49.484231   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 63/120
	I0816 17:09:50.485603   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 64/120
	I0816 17:09:51.487544   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 65/120
	I0816 17:09:52.488871   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 66/120
	I0816 17:09:53.490920   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 67/120
	I0816 17:09:54.492346   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 68/120
	I0816 17:09:55.493572   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 69/120
	I0816 17:09:56.495138   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 70/120
	I0816 17:09:57.496970   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 71/120
	I0816 17:09:58.499177   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 72/120
	I0816 17:09:59.501514   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 73/120
	I0816 17:10:00.503029   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 74/120
	I0816 17:10:01.505097   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 75/120
	I0816 17:10:02.507114   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 76/120
	I0816 17:10:03.508356   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 77/120
	I0816 17:10:04.509529   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 78/120
	I0816 17:10:05.511131   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 79/120
	I0816 17:10:06.513254   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 80/120
	I0816 17:10:07.515485   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 81/120
	I0816 17:10:08.516948   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 82/120
	I0816 17:10:09.519125   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 83/120
	I0816 17:10:10.520478   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 84/120
	I0816 17:10:11.522699   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 85/120
	I0816 17:10:12.524219   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 86/120
	I0816 17:10:13.526569   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 87/120
	I0816 17:10:14.527995   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 88/120
	I0816 17:10:15.529423   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 89/120
	I0816 17:10:16.531705   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 90/120
	I0816 17:10:17.533329   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 91/120
	I0816 17:10:18.534920   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 92/120
	I0816 17:10:19.536822   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 93/120
	I0816 17:10:20.538517   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 94/120
	I0816 17:10:21.540545   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 95/120
	I0816 17:10:22.542293   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 96/120
	I0816 17:10:23.543839   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 97/120
	I0816 17:10:24.545402   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 98/120
	I0816 17:10:25.546736   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 99/120
	I0816 17:10:26.548977   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 100/120
	I0816 17:10:27.551076   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 101/120
	I0816 17:10:28.552386   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 102/120
	I0816 17:10:29.553618   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 103/120
	I0816 17:10:30.555489   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 104/120
	I0816 17:10:31.557354   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 105/120
	I0816 17:10:32.559202   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 106/120
	I0816 17:10:33.560543   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 107/120
	I0816 17:10:34.561822   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 108/120
	I0816 17:10:35.563955   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 109/120
	I0816 17:10:36.565960   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 110/120
	I0816 17:10:37.567662   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 111/120
	I0816 17:10:38.568969   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 112/120
	I0816 17:10:39.571199   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 113/120
	I0816 17:10:40.572802   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 114/120
	I0816 17:10:41.574551   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 115/120
	I0816 17:10:42.576286   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 116/120
	I0816 17:10:43.577734   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 117/120
	I0816 17:10:44.578989   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 118/120
	I0816 17:10:45.581299   31306 main.go:141] libmachine: (ha-764617-m02) Waiting for machine to stop 119/120
	I0816 17:10:46.582369   31306 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 17:10:46.582578   31306 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-764617 node stop m02 -v=7 --alsologtostderr": exit status 30
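
The stop failure above follows a fixed pattern: the driver polls the VM state once per second for up to 120 attempts ("Waiting for machine to stop 0/120" through 119/120) and, when the guest is still "Running" after the last attempt, reports a temporary error that the CLI surfaces as exit status 30. The Go sketch below only illustrates that bounded wait; waitForStop and stateFn are names assumed for this example, not minikube's actual API.

// Minimal sketch of the bounded stop-wait visible in the log above.
// waitForStop and stateFn are assumptions for illustration only.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls stateFn once per second, up to attempts times,
// and fails if the machine never leaves the "Running" state.
func waitForStop(name string, attempts int, stateFn func() string) error {
	for i := 0; i < attempts; i++ {
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
		if stateFn() != "Running" {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A guest that ignores the shutdown request never leaves "Running", so the
	// wait exhausts its attempts and returns an error (only 3 attempts here to
	// keep the example short).
	err := waitForStop("ha-764617-m02", 3, func() string { return "Running" })
	fmt.Println("stop err:", err)
}
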
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
E0816 17:11:04.923484   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (19.192107951s)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:10:46.628977   31732 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:10:46.629277   31732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:10:46.629287   31732 out.go:358] Setting ErrFile to fd 2...
	I0816 17:10:46.629292   31732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:10:46.629457   31732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:10:46.629658   31732 out.go:352] Setting JSON to false
	I0816 17:10:46.629685   31732 mustload.go:65] Loading cluster: ha-764617
	I0816 17:10:46.629744   31732 notify.go:220] Checking for updates...
	I0816 17:10:46.630212   31732 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:10:46.630237   31732 status.go:255] checking status of ha-764617 ...
	I0816 17:10:46.630690   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:10:46.630798   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:10:46.646306   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0816 17:10:46.646777   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:10:46.647368   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:10:46.647405   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:10:46.647823   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:10:46.648068   31732 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:10:46.649688   31732 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:10:46.649711   31732 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:10:46.650120   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:10:46.650176   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:10:46.664427   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40883
	I0816 17:10:46.664916   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:10:46.665477   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:10:46.665504   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:10:46.665765   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:10:46.665941   31732 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:10:46.668556   31732 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:10:46.669059   31732 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:10:46.669092   31732 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:10:46.669210   31732 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:10:46.669506   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:10:46.669543   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:10:46.685077   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
	I0816 17:10:46.685423   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:10:46.685890   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:10:46.685909   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:10:46.686201   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:10:46.686387   31732 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:10:46.686583   31732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:10:46.686602   31732 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:10:46.689099   31732 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:10:46.689492   31732 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:10:46.689527   31732 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:10:46.689670   31732 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:10:46.689850   31732 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:10:46.690014   31732 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:10:46.690146   31732 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:10:46.782757   31732 ssh_runner.go:195] Run: systemctl --version
	I0816 17:10:46.789245   31732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:10:46.807124   31732 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:10:46.807154   31732 api_server.go:166] Checking apiserver status ...
	I0816 17:10:46.807188   31732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:10:46.823195   31732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:10:46.832885   31732 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:10:46.832951   31732 ssh_runner.go:195] Run: ls
	I0816 17:10:46.836930   31732 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:10:46.842587   31732 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:10:46.842609   31732 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:10:46.842619   31732 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:10:46.842644   31732 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:10:46.842961   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:10:46.843001   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:10:46.857754   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0816 17:10:46.858167   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:10:46.858672   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:10:46.858696   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:10:46.859069   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:10:46.859297   31732 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:10:46.860818   31732 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:10:46.860835   31732 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:10:46.861188   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:10:46.861235   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:10:46.876254   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0816 17:10:46.876619   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:10:46.877084   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:10:46.877104   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:10:46.877436   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:10:46.877617   31732 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:10:46.880236   31732 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:10:46.880710   31732 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:10:46.880731   31732 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:10:46.880843   31732 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:10:46.881133   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:10:46.881174   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:10:46.897366   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0816 17:10:46.897878   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:10:46.898385   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:10:46.898405   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:10:46.898680   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:10:46.898881   31732 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:10:46.899063   31732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:10:46.899081   31732 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:10:46.901899   31732 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:10:46.902353   31732 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:10:46.902379   31732 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:10:46.902493   31732 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:10:46.902644   31732 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:10:46.902788   31732 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:10:46.902898   31732 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	W0816 17:11:05.420835   31732 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:05.420946   31732 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0816 17:11:05.420970   31732 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:05.420981   31732 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:11:05.421027   31732 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:05.421045   31732 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:05.421473   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:05.421531   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:05.436313   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I0816 17:11:05.436876   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:05.437330   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:11:05.437351   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:05.437705   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:05.437916   31732 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:05.439405   31732 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:05.439435   31732 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:05.439723   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:05.439753   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:05.453975   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0816 17:11:05.454325   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:05.454753   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:11:05.454784   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:05.455075   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:05.455269   31732 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:05.457629   31732 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:05.458066   31732 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:05.458103   31732 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:05.458185   31732 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:05.458497   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:05.458546   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:05.473310   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0816 17:11:05.473671   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:05.474113   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:11:05.474131   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:05.474421   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:05.474611   31732 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:05.474789   31732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:05.474807   31732 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:05.477362   31732 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:05.477737   31732 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:05.477770   31732 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:05.477895   31732 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:05.478034   31732 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:05.478174   31732 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:05.478310   31732 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:05.561091   31732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:05.578141   31732 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:05.578186   31732 api_server.go:166] Checking apiserver status ...
	I0816 17:11:05.578228   31732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:05.592579   31732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:05.601553   31732 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:05.601630   31732 ssh_runner.go:195] Run: ls
	I0816 17:11:05.608157   31732 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:05.615789   31732 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:05.615809   31732 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:05.615817   31732 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:05.615831   31732 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:05.616099   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:05.616133   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:05.631998   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I0816 17:11:05.632409   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:05.632894   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:11:05.632916   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:05.633221   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:05.633423   31732 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:05.635055   31732 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:05.635071   31732 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:05.635463   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:05.635557   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:05.650627   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
	I0816 17:11:05.651082   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:05.651554   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:11:05.651578   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:05.651899   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:05.652050   31732 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:05.654791   31732 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:05.655221   31732 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:05.655244   31732 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:05.655385   31732 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:05.655715   31732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:05.655747   31732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:05.670490   31732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39745
	I0816 17:11:05.670907   31732 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:05.671294   31732 main.go:141] libmachine: Using API Version  1
	I0816 17:11:05.671314   31732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:05.671586   31732 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:05.671778   31732 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:05.671955   31732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:05.671984   31732 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:05.674739   31732 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:05.675146   31732 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:05.675180   31732 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:05.675298   31732 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:05.675484   31732 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:05.675641   31732 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:05.675803   31732 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:05.757141   31732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:05.774326   31732 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr" : exit status 3
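
The status output above is assembled per node: the checker first needs a working SSH session (it runs df -h /var), then asks systemd whether kubelet is active, and finally probes the apiserver /healthz endpoint. When the SSH dial fails with "no route to host", as it does for ha-764617-m02 here, the node is reported as Host:Error with kubelet and apiserver Nonexistent. The sketch below is a rough illustration of that probe order under those assumptions; nodeStatus, runSSH and healthz are illustrative names, not minikube's real types or helpers.

// Rough sketch of the per-node probe visible in the status log above:
// SSH reachability first (df -h /var), then kubelet via systemd, then the
// apiserver /healthz endpoint. nodeStatus, runSSH and healthz are assumptions.
package main

import "fmt"

type nodeStatus struct {
	Host, Kubelet, APIServer string
}

func probeNode(runSSH func(cmd string) error, healthz func() (int, error)) nodeStatus {
	// An unreachable host (e.g. "no route to host") short-circuits the probe.
	if err := runSSH("df -h /var"); err != nil {
		return nodeStatus{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	st := nodeStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	if runSSH("sudo systemctl is-active --quiet service kubelet") == nil {
		st.Kubelet = "Running"
	}
	if code, err := healthz(); err == nil && code == 200 {
		st.APIServer = "Running"
	}
	return st
}

func main() {
	// Example: a node whose SSH dial fails is reported exactly like
	// ha-764617-m02 in the output above.
	down := probeNode(
		func(string) error { return fmt.Errorf("dial tcp: no route to host") },
		func() (int, error) { return 0, fmt.Errorf("unreachable") })
	fmt.Printf("%+v\n", down)
}
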
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-764617 -n ha-764617
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-764617 logs -n 25: (1.285295374s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617:/home/docker/cp-test_ha-764617-m03_ha-764617.txt                       |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617 sudo cat                                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617.txt                                 |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m02:/home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m04 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp testdata/cp-test.txt                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617:/home/docker/cp-test_ha-764617-m04_ha-764617.txt                       |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617 sudo cat                                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617.txt                                 |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m02:/home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03:/home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m03 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-764617 node stop m02 -v=7                                                     | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:04:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:04:11.174420   27287 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:04:11.174645   27287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:04:11.174653   27287 out.go:358] Setting ErrFile to fd 2...
	I0816 17:04:11.174657   27287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:04:11.174805   27287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:04:11.175400   27287 out.go:352] Setting JSON to false
	I0816 17:04:11.176184   27287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2749,"bootTime":1723825102,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:04:11.176238   27287 start.go:139] virtualization: kvm guest
	I0816 17:04:11.178345   27287 out.go:177] * [ha-764617] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:04:11.179681   27287 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:04:11.179713   27287 notify.go:220] Checking for updates...
	I0816 17:04:11.181900   27287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:04:11.183037   27287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:04:11.184170   27287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:11.185338   27287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:04:11.186327   27287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:04:11.187420   27287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:04:11.221543   27287 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 17:04:11.222682   27287 start.go:297] selected driver: kvm2
	I0816 17:04:11.222697   27287 start.go:901] validating driver "kvm2" against <nil>
	I0816 17:04:11.222710   27287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:04:11.223397   27287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:04:11.223476   27287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 17:04:11.238691   27287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 17:04:11.238751   27287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:04:11.238965   27287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:04:11.239001   27287 cni.go:84] Creating CNI manager for ""
	I0816 17:04:11.239010   27287 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 17:04:11.239021   27287 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:04:11.239092   27287 start.go:340] cluster config:
	{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:04:11.239194   27287 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:04:11.240824   27287 out.go:177] * Starting "ha-764617" primary control-plane node in "ha-764617" cluster
	I0816 17:04:11.241860   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:04:11.241899   27287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 17:04:11.241907   27287 cache.go:56] Caching tarball of preloaded images
	I0816 17:04:11.241987   27287 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:04:11.242000   27287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:04:11.242295   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:11.242324   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json: {Name:mke1f2c51e39699076007c2f0252e975b8439c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:11.242473   27287 start.go:360] acquireMachinesLock for ha-764617: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:04:11.242514   27287 start.go:364] duration metric: took 25.966µs to acquireMachinesLock for "ha-764617"
	I0816 17:04:11.242535   27287 start.go:93] Provisioning new machine with config: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:04:11.242604   27287 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 17:04:11.244182   27287 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 17:04:11.244317   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:11.244348   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:11.258103   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0816 17:04:11.258510   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:11.259028   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:11.259044   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:11.259383   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:11.259556   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:11.259684   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:11.259825   27287 start.go:159] libmachine.API.Create for "ha-764617" (driver="kvm2")
	I0816 17:04:11.259862   27287 client.go:168] LocalClient.Create starting
	I0816 17:04:11.259890   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 17:04:11.259930   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:11.259947   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:11.260006   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 17:04:11.260024   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:11.260035   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:11.260051   27287 main.go:141] libmachine: Running pre-create checks...
	I0816 17:04:11.260060   27287 main.go:141] libmachine: (ha-764617) Calling .PreCreateCheck
	I0816 17:04:11.260370   27287 main.go:141] libmachine: (ha-764617) Calling .GetConfigRaw
	I0816 17:04:11.260758   27287 main.go:141] libmachine: Creating machine...
	I0816 17:04:11.260779   27287 main.go:141] libmachine: (ha-764617) Calling .Create
	I0816 17:04:11.260893   27287 main.go:141] libmachine: (ha-764617) Creating KVM machine...
	I0816 17:04:11.262073   27287 main.go:141] libmachine: (ha-764617) DBG | found existing default KVM network
	I0816 17:04:11.262688   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.262559   27310 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0816 17:04:11.262705   27287 main.go:141] libmachine: (ha-764617) DBG | created network xml: 
	I0816 17:04:11.262718   27287 main.go:141] libmachine: (ha-764617) DBG | <network>
	I0816 17:04:11.262730   27287 main.go:141] libmachine: (ha-764617) DBG |   <name>mk-ha-764617</name>
	I0816 17:04:11.262737   27287 main.go:141] libmachine: (ha-764617) DBG |   <dns enable='no'/>
	I0816 17:04:11.262751   27287 main.go:141] libmachine: (ha-764617) DBG |   
	I0816 17:04:11.262765   27287 main.go:141] libmachine: (ha-764617) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 17:04:11.262779   27287 main.go:141] libmachine: (ha-764617) DBG |     <dhcp>
	I0816 17:04:11.262793   27287 main.go:141] libmachine: (ha-764617) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 17:04:11.262804   27287 main.go:141] libmachine: (ha-764617) DBG |     </dhcp>
	I0816 17:04:11.262815   27287 main.go:141] libmachine: (ha-764617) DBG |   </ip>
	I0816 17:04:11.262824   27287 main.go:141] libmachine: (ha-764617) DBG |   
	I0816 17:04:11.262834   27287 main.go:141] libmachine: (ha-764617) DBG | </network>
	I0816 17:04:11.262850   27287 main.go:141] libmachine: (ha-764617) DBG | 
	I0816 17:04:11.267653   27287 main.go:141] libmachine: (ha-764617) DBG | trying to create private KVM network mk-ha-764617 192.168.39.0/24...
	I0816 17:04:11.328268   27287 main.go:141] libmachine: (ha-764617) DBG | private KVM network mk-ha-764617 192.168.39.0/24 created
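
The DBG lines above show the network definition minikube generated for this profile: an isolated libvirt network named mk-ha-764617 on 192.168.39.0/24, DNS disabled, with DHCP handing out 192.168.39.2-192.168.39.253. The kvm2 driver creates it through the libvirt API; purely as an illustration (not minikube's actual code path), roughly the same network could be defined and started by feeding that XML to virsh. The small Go wrapper below is a hypothetical sketch.

    // definenet.go - illustrative sketch only: define and start a libvirt
    // network like the one logged above by shelling out to virsh. Assumes
    // virsh is on PATH and the caller may manage qemu:///system networks.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    const networkXML = `<network>
      <name>mk-ha-764617</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
    	// Write the XML to a temp file so virsh can read it.
    	f, err := os.CreateTemp("", "mk-ha-*.xml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer os.Remove(f.Name())
    	if _, err := f.WriteString(networkXML); err != nil {
    		log.Fatal(err)
    	}
    	f.Close()

    	// Define the persistent network, then start it.
    	for _, args := range [][]string{
    		{"net-define", f.Name()},
    		{"net-start", "mk-ha-764617"},
    	} {
    		cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("virsh %v: %v\n%s", args, err, out)
    		}
    		fmt.Printf("virsh %v: ok\n", args)
    	}
    }
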
	I0816 17:04:11.328320   27287 main.go:141] libmachine: (ha-764617) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617 ...
	I0816 17:04:11.328335   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.328212   27310 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:11.328350   27287 main.go:141] libmachine: (ha-764617) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 17:04:11.328365   27287 main.go:141] libmachine: (ha-764617) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 17:04:11.565921   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.565786   27310 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa...
	I0816 17:04:11.665197   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.665075   27310 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/ha-764617.rawdisk...
	I0816 17:04:11.665230   27287 main.go:141] libmachine: (ha-764617) DBG | Writing magic tar header
	I0816 17:04:11.665244   27287 main.go:141] libmachine: (ha-764617) DBG | Writing SSH key tar header
	I0816 17:04:11.665253   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.665210   27310 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617 ...
	I0816 17:04:11.665346   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617 (perms=drwx------)
	I0816 17:04:11.665364   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617
	I0816 17:04:11.665375   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 17:04:11.665391   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 17:04:11.665401   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 17:04:11.665413   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 17:04:11.665419   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 17:04:11.665425   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 17:04:11.665434   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:11.665440   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 17:04:11.665474   27287 main.go:141] libmachine: (ha-764617) Creating domain...
	I0816 17:04:11.665498   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 17:04:11.665512   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins
	I0816 17:04:11.665536   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home
	I0816 17:04:11.665551   27287 main.go:141] libmachine: (ha-764617) DBG | Skipping /home - not owner
	I0816 17:04:11.666431   27287 main.go:141] libmachine: (ha-764617) define libvirt domain using xml: 
	I0816 17:04:11.666445   27287 main.go:141] libmachine: (ha-764617) <domain type='kvm'>
	I0816 17:04:11.666452   27287 main.go:141] libmachine: (ha-764617)   <name>ha-764617</name>
	I0816 17:04:11.666456   27287 main.go:141] libmachine: (ha-764617)   <memory unit='MiB'>2200</memory>
	I0816 17:04:11.666462   27287 main.go:141] libmachine: (ha-764617)   <vcpu>2</vcpu>
	I0816 17:04:11.666466   27287 main.go:141] libmachine: (ha-764617)   <features>
	I0816 17:04:11.666471   27287 main.go:141] libmachine: (ha-764617)     <acpi/>
	I0816 17:04:11.666475   27287 main.go:141] libmachine: (ha-764617)     <apic/>
	I0816 17:04:11.666480   27287 main.go:141] libmachine: (ha-764617)     <pae/>
	I0816 17:04:11.666485   27287 main.go:141] libmachine: (ha-764617)     
	I0816 17:04:11.666490   27287 main.go:141] libmachine: (ha-764617)   </features>
	I0816 17:04:11.666497   27287 main.go:141] libmachine: (ha-764617)   <cpu mode='host-passthrough'>
	I0816 17:04:11.666502   27287 main.go:141] libmachine: (ha-764617)   
	I0816 17:04:11.666507   27287 main.go:141] libmachine: (ha-764617)   </cpu>
	I0816 17:04:11.666512   27287 main.go:141] libmachine: (ha-764617)   <os>
	I0816 17:04:11.666519   27287 main.go:141] libmachine: (ha-764617)     <type>hvm</type>
	I0816 17:04:11.666524   27287 main.go:141] libmachine: (ha-764617)     <boot dev='cdrom'/>
	I0816 17:04:11.666537   27287 main.go:141] libmachine: (ha-764617)     <boot dev='hd'/>
	I0816 17:04:11.666557   27287 main.go:141] libmachine: (ha-764617)     <bootmenu enable='no'/>
	I0816 17:04:11.666578   27287 main.go:141] libmachine: (ha-764617)   </os>
	I0816 17:04:11.666585   27287 main.go:141] libmachine: (ha-764617)   <devices>
	I0816 17:04:11.666596   27287 main.go:141] libmachine: (ha-764617)     <disk type='file' device='cdrom'>
	I0816 17:04:11.666637   27287 main.go:141] libmachine: (ha-764617)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/boot2docker.iso'/>
	I0816 17:04:11.666660   27287 main.go:141] libmachine: (ha-764617)       <target dev='hdc' bus='scsi'/>
	I0816 17:04:11.666671   27287 main.go:141] libmachine: (ha-764617)       <readonly/>
	I0816 17:04:11.666684   27287 main.go:141] libmachine: (ha-764617)     </disk>
	I0816 17:04:11.666705   27287 main.go:141] libmachine: (ha-764617)     <disk type='file' device='disk'>
	I0816 17:04:11.666718   27287 main.go:141] libmachine: (ha-764617)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 17:04:11.666731   27287 main.go:141] libmachine: (ha-764617)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/ha-764617.rawdisk'/>
	I0816 17:04:11.666739   27287 main.go:141] libmachine: (ha-764617)       <target dev='hda' bus='virtio'/>
	I0816 17:04:11.666745   27287 main.go:141] libmachine: (ha-764617)     </disk>
	I0816 17:04:11.666752   27287 main.go:141] libmachine: (ha-764617)     <interface type='network'>
	I0816 17:04:11.666759   27287 main.go:141] libmachine: (ha-764617)       <source network='mk-ha-764617'/>
	I0816 17:04:11.666770   27287 main.go:141] libmachine: (ha-764617)       <model type='virtio'/>
	I0816 17:04:11.666782   27287 main.go:141] libmachine: (ha-764617)     </interface>
	I0816 17:04:11.666793   27287 main.go:141] libmachine: (ha-764617)     <interface type='network'>
	I0816 17:04:11.666804   27287 main.go:141] libmachine: (ha-764617)       <source network='default'/>
	I0816 17:04:11.666814   27287 main.go:141] libmachine: (ha-764617)       <model type='virtio'/>
	I0816 17:04:11.666821   27287 main.go:141] libmachine: (ha-764617)     </interface>
	I0816 17:04:11.666831   27287 main.go:141] libmachine: (ha-764617)     <serial type='pty'>
	I0816 17:04:11.666839   27287 main.go:141] libmachine: (ha-764617)       <target port='0'/>
	I0816 17:04:11.666845   27287 main.go:141] libmachine: (ha-764617)     </serial>
	I0816 17:04:11.666855   27287 main.go:141] libmachine: (ha-764617)     <console type='pty'>
	I0816 17:04:11.666867   27287 main.go:141] libmachine: (ha-764617)       <target type='serial' port='0'/>
	I0816 17:04:11.666877   27287 main.go:141] libmachine: (ha-764617)     </console>
	I0816 17:04:11.666890   27287 main.go:141] libmachine: (ha-764617)     <rng model='virtio'>
	I0816 17:04:11.666901   27287 main.go:141] libmachine: (ha-764617)       <backend model='random'>/dev/random</backend>
	I0816 17:04:11.666910   27287 main.go:141] libmachine: (ha-764617)     </rng>
	I0816 17:04:11.666921   27287 main.go:141] libmachine: (ha-764617)     
	I0816 17:04:11.666945   27287 main.go:141] libmachine: (ha-764617)     
	I0816 17:04:11.666965   27287 main.go:141] libmachine: (ha-764617)   </devices>
	I0816 17:04:11.666974   27287 main.go:141] libmachine: (ha-764617) </domain>
	I0816 17:04:11.666979   27287 main.go:141] libmachine: (ha-764617) 
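
The <domain> definition logged above describes the VM minikube requested: 2 vCPUs, 2200 MiB of RAM, host-passthrough CPU, boot from the boot2docker ISO (cdrom, scsi) falling back to the raw disk (virtio), one NIC on mk-ha-764617 plus one on the default network, a serial console, and a virtio RNG backed by /dev/random. As a minimal sketch of how such XML can be produced from Go structs with encoding/xml, assuming hypothetical struct names and rendering only a tiny fragment of what the kvm2 driver actually emits:

    // domainxml.go - illustrative sketch: render a small libvirt <domain>
    // fragment with encoding/xml. Struct names here are hypothetical.
    package main

    import (
    	"encoding/xml"
    	"fmt"
    	"log"
    )

    type Memory struct {
    	Unit  string `xml:"unit,attr"`
    	Value string `xml:",chardata"`
    }

    type Domain struct {
    	XMLName xml.Name `xml:"domain"`
    	Type    string   `xml:"type,attr"`
    	Name    string   `xml:"name"`
    	Memory  Memory   `xml:"memory"`
    	VCPU    int      `xml:"vcpu"`
    }

    func main() {
    	d := Domain{
    		Type:   "kvm",
    		Name:   "ha-764617",
    		Memory: Memory{Unit: "MiB", Value: "2200"},
    		VCPU:   2,
    	}
    	out, err := xml.MarshalIndent(d, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out))
    }
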
	I0816 17:04:11.672366   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:cf:a5:f4 in network default
	I0816 17:04:11.672928   27287 main.go:141] libmachine: (ha-764617) Ensuring networks are active...
	I0816 17:04:11.672941   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:11.673616   27287 main.go:141] libmachine: (ha-764617) Ensuring network default is active
	I0816 17:04:11.674000   27287 main.go:141] libmachine: (ha-764617) Ensuring network mk-ha-764617 is active
	I0816 17:04:11.674421   27287 main.go:141] libmachine: (ha-764617) Getting domain xml...
	I0816 17:04:11.675137   27287 main.go:141] libmachine: (ha-764617) Creating domain...
	I0816 17:04:12.863675   27287 main.go:141] libmachine: (ha-764617) Waiting to get IP...
	I0816 17:04:12.864442   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:12.864835   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:12.864877   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:12.864827   27310 retry.go:31] will retry after 238.805759ms: waiting for machine to come up
	I0816 17:04:13.105386   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:13.105864   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:13.105891   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:13.105825   27310 retry.go:31] will retry after 313.687436ms: waiting for machine to come up
	I0816 17:04:13.421431   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:13.421952   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:13.421974   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:13.421918   27310 retry.go:31] will retry after 369.042428ms: waiting for machine to come up
	I0816 17:04:13.792398   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:13.792886   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:13.792927   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:13.792852   27310 retry.go:31] will retry after 568.225467ms: waiting for machine to come up
	I0816 17:04:14.362432   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:14.362828   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:14.362860   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:14.362777   27310 retry.go:31] will retry after 741.209975ms: waiting for machine to come up
	I0816 17:04:15.105604   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:15.106046   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:15.106073   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:15.105994   27310 retry.go:31] will retry after 660.568903ms: waiting for machine to come up
	I0816 17:04:15.767780   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:15.768211   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:15.768239   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:15.768164   27310 retry.go:31] will retry after 894.998278ms: waiting for machine to come up
	I0816 17:04:16.664726   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:16.665143   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:16.665170   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:16.665101   27310 retry.go:31] will retry after 1.452752003s: waiting for machine to come up
	I0816 17:04:18.119859   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:18.120258   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:18.120286   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:18.120204   27310 retry.go:31] will retry after 1.178795077s: waiting for machine to come up
	I0816 17:04:19.300517   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:19.300993   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:19.301021   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:19.300948   27310 retry.go:31] will retry after 2.323538467s: waiting for machine to come up
	I0816 17:04:21.626714   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:21.627179   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:21.627207   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:21.627095   27310 retry.go:31] will retry after 2.426890051s: waiting for machine to come up
	I0816 17:04:24.056745   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:24.057302   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:24.057325   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:24.057137   27310 retry.go:31] will retry after 2.310439067s: waiting for machine to come up
	I0816 17:04:26.369421   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:26.369803   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:26.369828   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:26.369751   27310 retry.go:31] will retry after 4.128642923s: waiting for machine to come up
	I0816 17:04:30.503022   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.503484   27287 main.go:141] libmachine: (ha-764617) Found IP for machine: 192.168.39.18
	I0816 17:04:30.503515   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has current primary IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.503525   27287 main.go:141] libmachine: (ha-764617) Reserving static IP address...
	I0816 17:04:30.504069   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find host DHCP lease matching {name: "ha-764617", mac: "52:54:00:5b:ba:f5", ip: "192.168.39.18"} in network mk-ha-764617
	I0816 17:04:30.575307   27287 main.go:141] libmachine: (ha-764617) DBG | Getting to WaitForSSH function...
	I0816 17:04:30.575338   27287 main.go:141] libmachine: (ha-764617) Reserved static IP address: 192.168.39.18
	I0816 17:04:30.575351   27287 main.go:141] libmachine: (ha-764617) Waiting for SSH to be available...
	I0816 17:04:30.579341   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.579893   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.579927   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.580078   27287 main.go:141] libmachine: (ha-764617) DBG | Using SSH client type: external
	I0816 17:04:30.580094   27287 main.go:141] libmachine: (ha-764617) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa (-rw-------)
	I0816 17:04:30.580129   27287 main.go:141] libmachine: (ha-764617) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 17:04:30.580202   27287 main.go:141] libmachine: (ha-764617) DBG | About to run SSH command:
	I0816 17:04:30.580223   27287 main.go:141] libmachine: (ha-764617) DBG | exit 0
	I0816 17:04:30.712474   27287 main.go:141] libmachine: (ha-764617) DBG | SSH cmd err, output: <nil>: 
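
Between "Waiting to get IP" and the successful `exit 0` above, the driver polls the DHCP leases with a growing delay (the retry.go:31 lines) and then probes SSH using the external ssh client with the options logged at 17:04:30.580129 (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null, ConnectTimeout=10, the machine's id_rsa key). A stand-alone sketch of that wait loop, reusing the address and key path from this run purely as example values, might look like:

    // waitssh.go - illustrative sketch of the "wait for SSH" loop seen above:
    // probe `ssh ... exit 0` with a growing delay until it succeeds or we give up.
    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func sshReady(addr, key string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", key,
    		"docker@"+addr, "exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	addr := "192.168.39.18"
    	key := "/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa"

    	delay := 250 * time.Millisecond
    	deadline := time.Now().Add(3 * time.Minute)
    	for time.Now().Before(deadline) {
    		if sshReady(addr, key) {
    			log.Println("SSH is available")
    			return
    		}
    		log.Printf("not ready, retrying in %v", delay)
    		time.Sleep(delay)
    		delay *= 2 // grow the backoff, roughly like the retry.go lines above
    	}
    	log.Fatal("timed out waiting for SSH")
    }
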
	I0816 17:04:30.712829   27287 main.go:141] libmachine: (ha-764617) KVM machine creation complete!
	I0816 17:04:30.713295   27287 main.go:141] libmachine: (ha-764617) Calling .GetConfigRaw
	I0816 17:04:30.713814   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:30.713996   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:30.714230   27287 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 17:04:30.714263   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:30.715663   27287 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 17:04:30.715674   27287 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 17:04:30.715679   27287 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 17:04:30.715685   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:30.718094   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.718477   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.718504   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.718666   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:30.718828   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.718973   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.719081   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:30.719232   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:30.719569   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:30.719582   27287 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 17:04:30.831711   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:04:30.831738   27287 main.go:141] libmachine: Detecting the provisioner...
	I0816 17:04:30.831749   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:30.834505   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.834918   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.834939   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.835178   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:30.835493   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.835670   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.835833   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:30.835995   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:30.836186   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:30.836203   27287 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 17:04:30.949182   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 17:04:30.949252   27287 main.go:141] libmachine: found compatible host: buildroot
	I0816 17:04:30.949261   27287 main.go:141] libmachine: Provisioning with buildroot...
	I0816 17:04:30.949268   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:30.949518   27287 buildroot.go:166] provisioning hostname "ha-764617"
	I0816 17:04:30.949539   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:30.949765   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:30.952461   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.952994   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.953019   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.953235   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:30.953404   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.953580   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.953729   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:30.953878   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:30.954089   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:30.954108   27287 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617 && echo "ha-764617" | sudo tee /etc/hostname
	I0816 17:04:31.083399   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617
	
	I0816 17:04:31.083421   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.086023   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.086356   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.086391   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.086566   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.086748   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.086912   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.087031   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.087185   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:31.087385   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:31.087402   27287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:04:31.209097   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:04:31.209120   27287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:04:31.209150   27287 buildroot.go:174] setting up certificates
	I0816 17:04:31.209159   27287 provision.go:84] configureAuth start
	I0816 17:04:31.209168   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:31.209471   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:31.211993   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.212316   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.212340   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.212446   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.214616   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.215003   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.215030   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.215167   27287 provision.go:143] copyHostCerts
	I0816 17:04:31.215199   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:04:31.215228   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:04:31.215242   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:04:31.215307   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:04:31.215390   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:04:31.215407   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:04:31.215413   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:04:31.215442   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:04:31.215485   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:04:31.215502   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:04:31.215508   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:04:31.215529   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:04:31.215583   27287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617 san=[127.0.0.1 192.168.39.18 ha-764617 localhost minikube]
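
provision.go logs that it generates a per-machine server certificate with SANs covering 127.0.0.1, 192.168.39.18, ha-764617, localhost and minikube, signed against the CA in .minikube/certs. A self-contained sketch of creating a certificate with that SAN set using Go's crypto/x509 is shown below; it is self-signed for brevity (minikube signs with its CA key) and the expiry simply reuses the 26280h value from the cluster config above.

    // servercert.go - illustrative sketch: create a server certificate whose
    // SANs match the ones logged above. Self-signed for brevity only.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-764617"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-764617", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.18")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }
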
	I0816 17:04:31.373435   27287 provision.go:177] copyRemoteCerts
	I0816 17:04:31.373494   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:04:31.373517   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.376138   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.376421   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.376449   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.376660   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.376859   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.377015   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.377125   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:31.462167   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:04:31.462266   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:04:31.484481   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:04:31.484559   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 17:04:31.505907   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:04:31.505970   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 17:04:31.527200   27287 provision.go:87] duration metric: took 318.030237ms to configureAuth
	I0816 17:04:31.527226   27287 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:04:31.527416   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:04:31.527489   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.530425   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.530833   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.530857   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.531021   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.531191   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.531425   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.531586   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.531741   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:31.531914   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:31.531930   27287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:04:31.798292   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:04:31.798317   27287 main.go:141] libmachine: Checking connection to Docker...
	I0816 17:04:31.798326   27287 main.go:141] libmachine: (ha-764617) Calling .GetURL
	I0816 17:04:31.799912   27287 main.go:141] libmachine: (ha-764617) DBG | Using libvirt version 6000000
	I0816 17:04:31.802124   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.802428   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.802452   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.802622   27287 main.go:141] libmachine: Docker is up and running!
	I0816 17:04:31.802636   27287 main.go:141] libmachine: Reticulating splines...
	I0816 17:04:31.802645   27287 client.go:171] duration metric: took 20.542772627s to LocalClient.Create
	I0816 17:04:31.802671   27287 start.go:167] duration metric: took 20.542846204s to libmachine.API.Create "ha-764617"
	I0816 17:04:31.802681   27287 start.go:293] postStartSetup for "ha-764617" (driver="kvm2")
	I0816 17:04:31.802693   27287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:04:31.802714   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:31.802966   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:04:31.802989   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.805134   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.805491   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.805520   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.805631   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.805843   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.806002   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.806130   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:31.890154   27287 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:04:31.893837   27287 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:04:31.893857   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:04:31.893923   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:04:31.893990   27287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:04:31.893999   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:04:31.894079   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:04:31.902565   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:04:31.924279   27287 start.go:296] duration metric: took 121.587288ms for postStartSetup
	I0816 17:04:31.924337   27287 main.go:141] libmachine: (ha-764617) Calling .GetConfigRaw
	I0816 17:04:31.924935   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:31.927607   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.927910   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.927934   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.928141   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:31.928305   27287 start.go:128] duration metric: took 20.685691268s to createHost
	I0816 17:04:31.928324   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.930644   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.931018   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.931051   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.931135   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.931290   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.931454   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.931598   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.931777   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:31.931983   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:31.931993   27287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:04:32.044753   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827872.020223694
	
	I0816 17:04:32.044780   27287 fix.go:216] guest clock: 1723827872.020223694
	I0816 17:04:32.044789   27287 fix.go:229] Guest: 2024-08-16 17:04:32.020223694 +0000 UTC Remote: 2024-08-16 17:04:31.928315094 +0000 UTC m=+20.785775909 (delta=91.9086ms)
	I0816 17:04:32.044835   27287 fix.go:200] guest clock delta is within tolerance: 91.9086ms
	I0816 17:04:32.044843   27287 start.go:83] releasing machines lock for "ha-764617", held for 20.80232118s
	I0816 17:04:32.044876   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.045143   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:32.047638   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.047969   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:32.047995   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.048104   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.048560   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.048743   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.048837   27287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:04:32.048891   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:32.048954   27287 ssh_runner.go:195] Run: cat /version.json
	I0816 17:04:32.048976   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:32.051572   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.051819   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.051849   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:32.051871   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.052025   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:32.052186   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:32.052230   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:32.052258   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.052334   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:32.052403   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:32.052472   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:32.052557   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:32.052666   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:32.052755   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:32.133532   27287 ssh_runner.go:195] Run: systemctl --version
	I0816 17:04:32.166489   27287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:04:32.321880   27287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:04:32.327144   27287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:04:32.327210   27287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:04:32.342225   27287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 17:04:32.342252   27287 start.go:495] detecting cgroup driver to use...
	I0816 17:04:32.342315   27287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:04:32.359528   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:04:32.372483   27287 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:04:32.372545   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:04:32.385946   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:04:32.398731   27287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:04:32.510965   27287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:04:32.669176   27287 docker.go:233] disabling docker service ...
	I0816 17:04:32.669247   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:04:32.682954   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:04:32.694779   27287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:04:32.824420   27287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:04:32.938035   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:04:32.951141   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:04:32.968389   27287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:04:32.968457   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:32.978033   27287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:04:32.978103   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:32.987902   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:32.997597   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.007383   27287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:04:33.017246   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.026596   27287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.042318   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.051714   27287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:04:33.060974   27287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 17:04:33.061018   27287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 17:04:33.073318   27287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:04:33.082041   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:04:33.188184   27287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:04:33.325270   27287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:04:33.325343   27287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:04:33.330234   27287 start.go:563] Will wait 60s for crictl version
	I0816 17:04:33.330290   27287 ssh_runner.go:195] Run: which crictl
	I0816 17:04:33.333608   27287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:04:33.370836   27287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:04:33.370940   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:04:33.397234   27287 ssh_runner.go:195] Run: crio --version
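For reference, a minimal way to spot-check the CRI-O settings written by the sed edits above, once the service has been restarted (the file path is the one shown in the log; the grep patterns are only illustrative):

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'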
	I0816 17:04:33.423894   27287 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:04:33.424869   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:33.427349   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:33.427640   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:33.427672   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:33.427821   27287 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:04:33.431601   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:04:33.444115   27287 kubeadm.go:883] updating cluster {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:04:33.444354   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:04:33.444479   27287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:04:33.475671   27287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 17:04:33.475753   27287 ssh_runner.go:195] Run: which lz4
	I0816 17:04:33.479653   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 17:04:33.479732   27287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 17:04:33.483534   27287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 17:04:33.483560   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 17:04:34.625529   27287 crio.go:462] duration metric: took 1.14581672s to copy over tarball
	I0816 17:04:34.625604   27287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 17:04:36.603204   27287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977567495s)
	I0816 17:04:36.603231   27287 crio.go:469] duration metric: took 1.977674917s to extract the tarball
	I0816 17:04:36.603238   27287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 17:04:36.639542   27287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:04:36.685580   27287 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:04:36.685600   27287 cache_images.go:84] Images are preloaded, skipping loading
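As a sanity check that the preload really landed, the same crictl call used above can be filtered for the expected v1.31.0 images (the exact image list is not asserted by this log; the names below are the standard registry.k8s.io ones):

  sudo crictl images --output json | grep -o '"registry.k8s.io/[^"]*"' | sort
  sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|coredns|etcd'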
	I0816 17:04:36.685607   27287 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.31.0 crio true true} ...
	I0816 17:04:36.685701   27287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:04:36.685778   27287 ssh_runner.go:195] Run: crio config
	I0816 17:04:36.729932   27287 cni.go:84] Creating CNI manager for ""
	I0816 17:04:36.729949   27287 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 17:04:36.729958   27287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:04:36.729979   27287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-764617 NodeName:ha-764617 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:04:36.730114   27287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-764617"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
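A hedged sketch of how the generated kubeadm config could be sanity-checked once it has been copied to /var/tmp/minikube/kubeadm.yaml later in this run (kubeadm v1.31 ships a `config validate` subcommand; the env/PATH form mirrors the init invocation further below):

  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml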
	I0816 17:04:36.730136   27287 kube-vip.go:115] generating kube-vip config ...
	I0816 17:04:36.730175   27287 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:04:36.745310   27287 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:04:36.745443   27287 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
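Once the control plane is up, the effect of this kube-vip manifest can be checked with commands along these lines (the VIP and interface come from the config above; the kubectl and kubeconfig paths are the ones used later in this log):

  ip addr show eth0 | grep 192.168.39.254
  sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system | grep kube-vip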
	I0816 17:04:36.745505   27287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:04:36.754575   27287 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:04:36.754650   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 17:04:36.763403   27287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0816 17:04:36.779161   27287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:04:36.794117   27287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0816 17:04:36.809831   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0816 17:04:36.825108   27287 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:04:36.828513   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:04:36.840109   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:04:36.945366   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:04:36.960672   27287 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.18
	I0816 17:04:36.960693   27287 certs.go:194] generating shared ca certs ...
	I0816 17:04:36.960711   27287 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:36.960862   27287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:04:36.960920   27287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:04:36.960933   27287 certs.go:256] generating profile certs ...
	I0816 17:04:36.960997   27287 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:04:36.961014   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt with IP's: []
	I0816 17:04:37.176726   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt ...
	I0816 17:04:37.176760   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt: {Name:mk29d5c77bd5773d8bf6de36574a6e04d0236cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.176962   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key ...
	I0816 17:04:37.176979   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key: {Name:mk6489e419fcaef7b92be41faf0bb734efb07372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.177094   27287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881
	I0816 17:04:37.177117   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.254]
	I0816 17:04:37.290736   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881 ...
	I0816 17:04:37.290770   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881: {Name:mkb16c0a15ab305065c0248cc0b7d908e1c729bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.290951   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881 ...
	I0816 17:04:37.290968   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881: {Name:mk149993b661876c649e1091e4e9fb3fe6eb5c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.291061   27287 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:04:37.291169   27287 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:04:37.291252   27287 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:04:37.291273   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt with IP's: []
	I0816 17:04:37.458550   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt ...
	I0816 17:04:37.458580   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt: {Name:mk27f9575b8fc72d6b583bd1d3945d7bdb054f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.458749   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key ...
	I0816 17:04:37.458764   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key: {Name:mk84ca222215bfb6433b5f26a0008fbd0ef2ecde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.458854   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:04:37.458876   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:04:37.458891   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:04:37.458908   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:04:37.458927   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:04:37.458944   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:04:37.458960   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:04:37.458993   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:04:37.459070   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:04:37.459118   27287 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:04:37.459134   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:04:37.459168   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:04:37.459204   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:04:37.459236   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:04:37.459291   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:04:37.459331   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.459353   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.459378   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.459910   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:04:37.482899   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:04:37.504048   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:04:37.525208   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:04:37.546815   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 17:04:37.568681   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 17:04:37.590215   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:04:37.611561   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:04:37.633430   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:04:37.654743   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:04:37.676033   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:04:37.699254   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:04:37.736664   27287 ssh_runner.go:195] Run: openssl version
	I0816 17:04:37.745980   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:04:37.756383   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.760494   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.760538   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.765914   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:04:37.775960   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:04:37.785868   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.789900   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.789951   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.795013   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:04:37.804845   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:04:37.814710   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.818676   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.818726   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.823779   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
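The hash-named symlinks created above follow the standard OpenSSL subject-hash convention; for example, the b5213941.0 link seen earlier for minikubeCA can be reproduced with:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0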
	I0816 17:04:37.833390   27287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:04:37.837091   27287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:04:37.837148   27287 kubeadm.go:392] StartCluster: {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:04:37.837218   27287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:04:37.837271   27287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:04:37.872744   27287 cri.go:89] found id: ""
	I0816 17:04:37.872806   27287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 17:04:37.882348   27287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 17:04:37.891808   27287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 17:04:37.901344   27287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 17:04:37.901360   27287 kubeadm.go:157] found existing configuration files:
	
	I0816 17:04:37.901399   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 17:04:37.910156   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 17:04:37.910215   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 17:04:37.919131   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 17:04:37.927578   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 17:04:37.927651   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 17:04:37.936765   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 17:04:37.945292   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 17:04:37.945367   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 17:04:37.954326   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 17:04:37.962804   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 17:04:37.962864   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 17:04:37.971386   27287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 17:04:38.058317   27287 kubeadm.go:310] W0816 17:04:38.040644     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:04:38.059090   27287 kubeadm.go:310] W0816 17:04:38.041564     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:04:38.155015   27287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
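The two deprecation warnings above point at kubeadm's stock migration path; in this environment that command would be a sketch along these lines (the --new-config path is only illustrative):

  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm config migrate \
    --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml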
	I0816 17:04:49.272689   27287 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 17:04:49.272761   27287 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 17:04:49.272877   27287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 17:04:49.273019   27287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 17:04:49.273139   27287 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 17:04:49.273208   27287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 17:04:49.274913   27287 out.go:235]   - Generating certificates and keys ...
	I0816 17:04:49.275005   27287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 17:04:49.275070   27287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 17:04:49.275135   27287 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 17:04:49.275194   27287 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 17:04:49.275252   27287 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 17:04:49.275294   27287 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 17:04:49.275343   27287 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 17:04:49.275437   27287 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-764617 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0816 17:04:49.275491   27287 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 17:04:49.275598   27287 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-764617 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0816 17:04:49.275653   27287 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 17:04:49.275710   27287 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 17:04:49.275751   27287 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 17:04:49.275797   27287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 17:04:49.275843   27287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 17:04:49.275894   27287 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 17:04:49.275951   27287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 17:04:49.276020   27287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 17:04:49.276070   27287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 17:04:49.276138   27287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 17:04:49.276193   27287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 17:04:49.277491   27287 out.go:235]   - Booting up control plane ...
	I0816 17:04:49.277585   27287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 17:04:49.277682   27287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 17:04:49.277767   27287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 17:04:49.277904   27287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 17:04:49.278017   27287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 17:04:49.278072   27287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 17:04:49.278261   27287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 17:04:49.278399   27287 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 17:04:49.278456   27287 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.459421ms
	I0816 17:04:49.278542   27287 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 17:04:49.278631   27287 kubeadm.go:310] [api-check] The API server is healthy after 6.002832221s
	I0816 17:04:49.278750   27287 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 17:04:49.278889   27287 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 17:04:49.278959   27287 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 17:04:49.279105   27287 kubeadm.go:310] [mark-control-plane] Marking the node ha-764617 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 17:04:49.279155   27287 kubeadm.go:310] [bootstrap-token] Using token: okdxih.5xmh1by8w9juwakw
	I0816 17:04:49.280296   27287 out.go:235]   - Configuring RBAC rules ...
	I0816 17:04:49.280383   27287 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 17:04:49.280451   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 17:04:49.280584   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 17:04:49.280734   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 17:04:49.280846   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 17:04:49.280946   27287 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 17:04:49.281122   27287 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 17:04:49.281198   27287 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 17:04:49.281252   27287 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 17:04:49.281259   27287 kubeadm.go:310] 
	I0816 17:04:49.281307   27287 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 17:04:49.281312   27287 kubeadm.go:310] 
	I0816 17:04:49.281431   27287 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 17:04:49.281440   27287 kubeadm.go:310] 
	I0816 17:04:49.281475   27287 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 17:04:49.281563   27287 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 17:04:49.281646   27287 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 17:04:49.281655   27287 kubeadm.go:310] 
	I0816 17:04:49.281733   27287 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 17:04:49.281743   27287 kubeadm.go:310] 
	I0816 17:04:49.281824   27287 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 17:04:49.281833   27287 kubeadm.go:310] 
	I0816 17:04:49.281903   27287 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 17:04:49.282006   27287 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 17:04:49.282108   27287 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 17:04:49.282117   27287 kubeadm.go:310] 
	I0816 17:04:49.282261   27287 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 17:04:49.282408   27287 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 17:04:49.282422   27287 kubeadm.go:310] 
	I0816 17:04:49.282542   27287 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token okdxih.5xmh1by8w9juwakw \
	I0816 17:04:49.282693   27287 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 17:04:49.282726   27287 kubeadm.go:310] 	--control-plane 
	I0816 17:04:49.282735   27287 kubeadm.go:310] 
	I0816 17:04:49.282819   27287 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 17:04:49.282826   27287 kubeadm.go:310] 
	I0816 17:04:49.282937   27287 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token okdxih.5xmh1by8w9juwakw \
	I0816 17:04:49.283065   27287 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
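After an init like the one above, a minimal health check using the binaries and kubeconfig already present on the node might look like this (a sketch, not something this run executed):

  sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
  sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system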
	I0816 17:04:49.283081   27287 cni.go:84] Creating CNI manager for ""
	I0816 17:04:49.283089   27287 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 17:04:49.284435   27287 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 17:04:49.285374   27287 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 17:04:49.290271   27287 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 17:04:49.290284   27287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 17:04:49.310395   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 17:04:49.731837   27287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 17:04:49.731919   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:49.731967   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-764617 minikube.k8s.io/updated_at=2024_08_16T17_04_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-764617 minikube.k8s.io/primary=true
	I0816 17:04:49.759424   27287 ops.go:34] apiserver oom_adj: -16
	I0816 17:04:49.929739   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:50.430106   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:50.929879   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:51.430115   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:51.929910   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:52.430781   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:52.930148   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:53.430816   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:53.568437   27287 kubeadm.go:1113] duration metric: took 3.836568007s to wait for elevateKubeSystemPrivileges
	I0816 17:04:53.568470   27287 kubeadm.go:394] duration metric: took 15.731325614s to StartCluster
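The repeated `kubectl get sa default` calls above are a simple readiness poll for the default service account; expressed as a loop it is roughly the following (interval inferred from the ~500 ms spacing of the log entries):

  until sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
    sleep 0.5
  done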
	I0816 17:04:53.568485   27287 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:53.568549   27287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:04:53.569221   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:53.569405   27287 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:04:53.569423   27287 start.go:241] waiting for startup goroutines ...
	I0816 17:04:53.569436   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 17:04:53.569429   27287 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 17:04:53.569482   27287 addons.go:69] Setting storage-provisioner=true in profile "ha-764617"
	I0816 17:04:53.569500   27287 addons.go:69] Setting default-storageclass=true in profile "ha-764617"
	I0816 17:04:53.569513   27287 addons.go:234] Setting addon storage-provisioner=true in "ha-764617"
	I0816 17:04:53.569533   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:04:53.569555   27287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-764617"
	I0816 17:04:53.569636   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:04:53.569915   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.569935   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.569950   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.569978   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.585332   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0816 17:04:53.585337   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44019
	I0816 17:04:53.585860   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.585959   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.586355   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.586374   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.586487   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.586508   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.586714   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.586906   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.587088   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:53.587271   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.587295   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.589361   27287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:04:53.589725   27287 kapi.go:59] client config for ha-764617: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt", KeyFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key", CAFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 17:04:53.590222   27287 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 17:04:53.590492   27287 addons.go:234] Setting addon default-storageclass=true in "ha-764617"
	I0816 17:04:53.590533   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:04:53.590896   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.590928   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.603076   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0816 17:04:53.603549   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.604052   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.604070   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.604493   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.604697   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:53.606306   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I0816 17:04:53.606807   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:53.606861   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.607374   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.607419   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.607765   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.608235   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.608260   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.608587   27287 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:04:53.610269   27287 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:04:53.610286   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 17:04:53.610301   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:53.613537   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.613969   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:53.613998   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.614096   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:53.614295   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:53.614485   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:53.614662   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:53.624062   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I0816 17:04:53.624481   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.625005   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.625031   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.625342   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.625621   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:53.627483   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:53.627770   27287 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 17:04:53.627787   27287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 17:04:53.627805   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:53.630290   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.630708   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:53.630732   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.630936   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:53.631090   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:53.631243   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:53.631362   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:53.696311   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 17:04:53.759477   27287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:04:53.789447   27287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 17:04:54.163504   27287 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
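
The sed pipeline run against the coredns ConfigMap a few lines above is what produces this "host record injected" message. Assuming a stock Corefile, the edited file would contain roughly the following fragment (reconstructed from the two sed expressions in the log, not copied from the cluster; unrelated directives are elided):

    .:53 {
        log                      # inserted immediately before the existing "errors" directive
        errors
        ...
        hosts {                  # inserted immediately before the existing "forward" directive
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The hosts block resolves host.minikube.internal to the host-side gateway address before anything is forwarded to the host resolver.
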
	I0816 17:04:54.432951   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.432979   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.433014   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.433033   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.433264   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.433278   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.433289   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.433297   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.433340   27287 main.go:141] libmachine: (ha-764617) DBG | Closing plugin on server side
	I0816 17:04:54.433553   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.433568   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.433624   27287 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 17:04:54.433640   27287 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 17:04:54.433721   27287 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 17:04:54.433728   27287 round_trippers.go:469] Request Headers:
	I0816 17:04:54.433739   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:04:54.433744   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:04:54.434163   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.434183   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.434193   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.434202   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.434404   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.434417   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.446734   27287 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0816 17:04:54.447515   27287 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 17:04:54.447535   27287 round_trippers.go:469] Request Headers:
	I0816 17:04:54.447546   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:04:54.447555   27287 round_trippers.go:473]     Content-Type: application/json
	I0816 17:04:54.447560   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:04:54.450522   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:04:54.450680   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.450695   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.450967   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.450980   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.452662   27287 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 17:04:54.453868   27287 addons.go:510] duration metric: took 884.435075ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0816 17:04:54.453898   27287 start.go:246] waiting for cluster config update ...
	I0816 17:04:54.453907   27287 start.go:255] writing updated cluster config ...
	I0816 17:04:54.455355   27287 out.go:201] 
	I0816 17:04:54.456729   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:04:54.456801   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:54.458188   27287 out.go:177] * Starting "ha-764617-m02" control-plane node in "ha-764617" cluster
	I0816 17:04:54.459321   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:04:54.459338   27287 cache.go:56] Caching tarball of preloaded images
	I0816 17:04:54.459424   27287 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:04:54.459438   27287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:04:54.459514   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:54.459659   27287 start.go:360] acquireMachinesLock for ha-764617-m02: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:04:54.459700   27287 start.go:364] duration metric: took 23.793µs to acquireMachinesLock for "ha-764617-m02"
	I0816 17:04:54.459729   27287 start.go:93] Provisioning new machine with config: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:04:54.459788   27287 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0816 17:04:54.461263   27287 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 17:04:54.461335   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:54.461360   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:54.475683   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
	I0816 17:04:54.476121   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:54.476668   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:54.476694   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:54.477067   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:54.477289   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:04:54.477529   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:04:54.477753   27287 start.go:159] libmachine.API.Create for "ha-764617" (driver="kvm2")
	I0816 17:04:54.477778   27287 client.go:168] LocalClient.Create starting
	I0816 17:04:54.477809   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 17:04:54.477844   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:54.477860   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:54.477905   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 17:04:54.477922   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:54.477933   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:54.477949   27287 main.go:141] libmachine: Running pre-create checks...
	I0816 17:04:54.477957   27287 main.go:141] libmachine: (ha-764617-m02) Calling .PreCreateCheck
	I0816 17:04:54.478121   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetConfigRaw
	I0816 17:04:54.478601   27287 main.go:141] libmachine: Creating machine...
	I0816 17:04:54.478613   27287 main.go:141] libmachine: (ha-764617-m02) Calling .Create
	I0816 17:04:54.478746   27287 main.go:141] libmachine: (ha-764617-m02) Creating KVM machine...
	I0816 17:04:54.480066   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found existing default KVM network
	I0816 17:04:54.480120   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found existing private KVM network mk-ha-764617
	I0816 17:04:54.480315   27287 main.go:141] libmachine: (ha-764617-m02) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02 ...
	I0816 17:04:54.480338   27287 main.go:141] libmachine: (ha-764617-m02) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 17:04:54.480423   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.480315   27641 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:54.480524   27287 main.go:141] libmachine: (ha-764617-m02) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 17:04:54.739664   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.739505   27641 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa...
	I0816 17:04:54.905076   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.904937   27641 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/ha-764617-m02.rawdisk...
	I0816 17:04:54.905097   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Writing magic tar header
	I0816 17:04:54.905107   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Writing SSH key tar header
	I0816 17:04:54.905115   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.905072   27641 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02 ...
	I0816 17:04:54.905225   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02
	I0816 17:04:54.905287   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02 (perms=drwx------)
	I0816 17:04:54.905317   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 17:04:54.905338   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 17:04:54.905351   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 17:04:54.905373   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 17:04:54.905386   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 17:04:54.905397   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 17:04:54.905412   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:54.905425   27287 main.go:141] libmachine: (ha-764617-m02) Creating domain...
	I0816 17:04:54.905446   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 17:04:54.905459   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 17:04:54.905473   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins
	I0816 17:04:54.905509   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home
	I0816 17:04:54.905537   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Skipping /home - not owner
	I0816 17:04:54.906422   27287 main.go:141] libmachine: (ha-764617-m02) define libvirt domain using xml: 
	I0816 17:04:54.906449   27287 main.go:141] libmachine: (ha-764617-m02) <domain type='kvm'>
	I0816 17:04:54.906460   27287 main.go:141] libmachine: (ha-764617-m02)   <name>ha-764617-m02</name>
	I0816 17:04:54.906472   27287 main.go:141] libmachine: (ha-764617-m02)   <memory unit='MiB'>2200</memory>
	I0816 17:04:54.906483   27287 main.go:141] libmachine: (ha-764617-m02)   <vcpu>2</vcpu>
	I0816 17:04:54.906491   27287 main.go:141] libmachine: (ha-764617-m02)   <features>
	I0816 17:04:54.906499   27287 main.go:141] libmachine: (ha-764617-m02)     <acpi/>
	I0816 17:04:54.906509   27287 main.go:141] libmachine: (ha-764617-m02)     <apic/>
	I0816 17:04:54.906520   27287 main.go:141] libmachine: (ha-764617-m02)     <pae/>
	I0816 17:04:54.906528   27287 main.go:141] libmachine: (ha-764617-m02)     
	I0816 17:04:54.906534   27287 main.go:141] libmachine: (ha-764617-m02)   </features>
	I0816 17:04:54.906541   27287 main.go:141] libmachine: (ha-764617-m02)   <cpu mode='host-passthrough'>
	I0816 17:04:54.906561   27287 main.go:141] libmachine: (ha-764617-m02)   
	I0816 17:04:54.906580   27287 main.go:141] libmachine: (ha-764617-m02)   </cpu>
	I0816 17:04:54.906586   27287 main.go:141] libmachine: (ha-764617-m02)   <os>
	I0816 17:04:54.906602   27287 main.go:141] libmachine: (ha-764617-m02)     <type>hvm</type>
	I0816 17:04:54.906611   27287 main.go:141] libmachine: (ha-764617-m02)     <boot dev='cdrom'/>
	I0816 17:04:54.906615   27287 main.go:141] libmachine: (ha-764617-m02)     <boot dev='hd'/>
	I0816 17:04:54.906624   27287 main.go:141] libmachine: (ha-764617-m02)     <bootmenu enable='no'/>
	I0816 17:04:54.906628   27287 main.go:141] libmachine: (ha-764617-m02)   </os>
	I0816 17:04:54.906636   27287 main.go:141] libmachine: (ha-764617-m02)   <devices>
	I0816 17:04:54.906641   27287 main.go:141] libmachine: (ha-764617-m02)     <disk type='file' device='cdrom'>
	I0816 17:04:54.906666   27287 main.go:141] libmachine: (ha-764617-m02)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/boot2docker.iso'/>
	I0816 17:04:54.906684   27287 main.go:141] libmachine: (ha-764617-m02)       <target dev='hdc' bus='scsi'/>
	I0816 17:04:54.906697   27287 main.go:141] libmachine: (ha-764617-m02)       <readonly/>
	I0816 17:04:54.906706   27287 main.go:141] libmachine: (ha-764617-m02)     </disk>
	I0816 17:04:54.906713   27287 main.go:141] libmachine: (ha-764617-m02)     <disk type='file' device='disk'>
	I0816 17:04:54.906722   27287 main.go:141] libmachine: (ha-764617-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 17:04:54.906730   27287 main.go:141] libmachine: (ha-764617-m02)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/ha-764617-m02.rawdisk'/>
	I0816 17:04:54.906739   27287 main.go:141] libmachine: (ha-764617-m02)       <target dev='hda' bus='virtio'/>
	I0816 17:04:54.906748   27287 main.go:141] libmachine: (ha-764617-m02)     </disk>
	I0816 17:04:54.906772   27287 main.go:141] libmachine: (ha-764617-m02)     <interface type='network'>
	I0816 17:04:54.906804   27287 main.go:141] libmachine: (ha-764617-m02)       <source network='mk-ha-764617'/>
	I0816 17:04:54.906844   27287 main.go:141] libmachine: (ha-764617-m02)       <model type='virtio'/>
	I0816 17:04:54.906860   27287 main.go:141] libmachine: (ha-764617-m02)     </interface>
	I0816 17:04:54.906869   27287 main.go:141] libmachine: (ha-764617-m02)     <interface type='network'>
	I0816 17:04:54.906897   27287 main.go:141] libmachine: (ha-764617-m02)       <source network='default'/>
	I0816 17:04:54.906924   27287 main.go:141] libmachine: (ha-764617-m02)       <model type='virtio'/>
	I0816 17:04:54.906937   27287 main.go:141] libmachine: (ha-764617-m02)     </interface>
	I0816 17:04:54.906948   27287 main.go:141] libmachine: (ha-764617-m02)     <serial type='pty'>
	I0816 17:04:54.906958   27287 main.go:141] libmachine: (ha-764617-m02)       <target port='0'/>
	I0816 17:04:54.906970   27287 main.go:141] libmachine: (ha-764617-m02)     </serial>
	I0816 17:04:54.906984   27287 main.go:141] libmachine: (ha-764617-m02)     <console type='pty'>
	I0816 17:04:54.906999   27287 main.go:141] libmachine: (ha-764617-m02)       <target type='serial' port='0'/>
	I0816 17:04:54.907012   27287 main.go:141] libmachine: (ha-764617-m02)     </console>
	I0816 17:04:54.907026   27287 main.go:141] libmachine: (ha-764617-m02)     <rng model='virtio'>
	I0816 17:04:54.907041   27287 main.go:141] libmachine: (ha-764617-m02)       <backend model='random'>/dev/random</backend>
	I0816 17:04:54.907054   27287 main.go:141] libmachine: (ha-764617-m02)     </rng>
	I0816 17:04:54.907079   27287 main.go:141] libmachine: (ha-764617-m02)     
	I0816 17:04:54.907099   27287 main.go:141] libmachine: (ha-764617-m02)     
	I0816 17:04:54.907127   27287 main.go:141] libmachine: (ha-764617-m02)   </devices>
	I0816 17:04:54.907138   27287 main.go:141] libmachine: (ha-764617-m02) </domain>
	I0816 17:04:54.907151   27287 main.go:141] libmachine: (ha-764617-m02) 
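
The block above is the complete libvirt domain XML that the kvm2 driver defines for the new node; the driver submits it to libvirtd programmatically, nothing is run by hand. Purely as an illustration of the same operations, the manual equivalent with the stock virsh CLI would look roughly like this (the XML file name is hypothetical; qemu:///system matches the KVMQemuURI from the config dump above):

    # hypothetical manual equivalent of what the driver does through the libvirt API
    virsh --connect qemu:///system define ha-764617-m02.xml        # register the domain from the XML above
    virsh --connect qemu:///system start ha-764617-m02             # boot it
    virsh --connect qemu:///system net-dhcp-leases mk-ha-764617    # watch for the node's DHCP lease / IP
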
	I0816 17:04:54.913591   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:07:50:41 in network default
	I0816 17:04:54.914164   27287 main.go:141] libmachine: (ha-764617-m02) Ensuring networks are active...
	I0816 17:04:54.914182   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:54.914860   27287 main.go:141] libmachine: (ha-764617-m02) Ensuring network default is active
	I0816 17:04:54.915188   27287 main.go:141] libmachine: (ha-764617-m02) Ensuring network mk-ha-764617 is active
	I0816 17:04:54.915654   27287 main.go:141] libmachine: (ha-764617-m02) Getting domain xml...
	I0816 17:04:54.916475   27287 main.go:141] libmachine: (ha-764617-m02) Creating domain...
	I0816 17:04:56.120910   27287 main.go:141] libmachine: (ha-764617-m02) Waiting to get IP...
	I0816 17:04:56.123112   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:56.123563   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:56.123587   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:56.123545   27641 retry.go:31] will retry after 262.894322ms: waiting for machine to come up
	I0816 17:04:56.388173   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:56.388595   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:56.388640   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:56.388547   27641 retry.go:31] will retry after 331.429254ms: waiting for machine to come up
	I0816 17:04:56.722096   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:56.722532   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:56.722555   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:56.722498   27641 retry.go:31] will retry after 356.120471ms: waiting for machine to come up
	I0816 17:04:57.079691   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:57.080201   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:57.080229   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:57.080136   27641 retry.go:31] will retry after 514.370488ms: waiting for machine to come up
	I0816 17:04:57.596018   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:57.596594   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:57.596636   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:57.596541   27641 retry.go:31] will retry after 552.829899ms: waiting for machine to come up
	I0816 17:04:58.150731   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:58.151261   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:58.151283   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:58.151189   27641 retry.go:31] will retry after 611.263778ms: waiting for machine to come up
	I0816 17:04:58.763791   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:58.764307   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:58.764332   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:58.764275   27641 retry.go:31] will retry after 1.056287332s: waiting for machine to come up
	I0816 17:04:59.822389   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:59.822774   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:59.822803   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:59.822739   27641 retry.go:31] will retry after 1.157897358s: waiting for machine to come up
	I0816 17:05:00.981939   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:00.982458   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:00.982487   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:00.982406   27641 retry.go:31] will retry after 1.380933513s: waiting for machine to come up
	I0816 17:05:02.364965   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:02.365510   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:02.365532   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:02.365457   27641 retry.go:31] will retry after 2.011545615s: waiting for machine to come up
	I0816 17:05:04.379865   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:04.380325   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:04.380351   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:04.380277   27641 retry.go:31] will retry after 2.507828277s: waiting for machine to come up
	I0816 17:05:06.891550   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:06.891913   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:06.891933   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:06.891886   27641 retry.go:31] will retry after 2.791745221s: waiting for machine to come up
	I0816 17:05:09.685124   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:09.685567   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:09.685612   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:09.685517   27641 retry.go:31] will retry after 4.387344822s: waiting for machine to come up
	I0816 17:05:14.077676   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.078051   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has current primary IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.078083   27287 main.go:141] libmachine: (ha-764617-m02) Found IP for machine: 192.168.39.184
	I0816 17:05:14.078097   27287 main.go:141] libmachine: (ha-764617-m02) Reserving static IP address...
	I0816 17:05:14.078416   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find host DHCP lease matching {name: "ha-764617-m02", mac: "52:54:00:cf:3e:7f", ip: "192.168.39.184"} in network mk-ha-764617
	I0816 17:05:14.151792   27287 main.go:141] libmachine: (ha-764617-m02) Reserved static IP address: 192.168.39.184
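
The retry.go lines above show the driver polling libvirt for the guest's DHCP lease with an increasing delay until an IP appears. A minimal Go sketch of that pattern, for readers following the log (the function names and the stubbed lookup are assumptions for illustration, not minikube's actual code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupIP stands in for the real DHCP-lease lookup against libvirt; stubbed here.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls lookupIP with a growing delay, roughly mirroring the
    // "will retry after ..." lines seen in the log above.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ip, err := lookupIP()
    		if err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay += delay / 2 // back off gradually between attempts
    	}
    	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
    	if ip, err := waitForIP(2 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("got IP:", ip)
    	}
    }
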
	I0816 17:05:14.151825   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Getting to WaitForSSH function...
	I0816 17:05:14.151835   27287 main.go:141] libmachine: (ha-764617-m02) Waiting for SSH to be available...
	I0816 17:05:14.154304   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.154742   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.154765   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.154942   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Using SSH client type: external
	I0816 17:05:14.154964   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa (-rw-------)
	I0816 17:05:14.154991   27287 main.go:141] libmachine: (ha-764617-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 17:05:14.155000   27287 main.go:141] libmachine: (ha-764617-m02) DBG | About to run SSH command:
	I0816 17:05:14.155063   27287 main.go:141] libmachine: (ha-764617-m02) DBG | exit 0
	I0816 17:05:14.276467   27287 main.go:141] libmachine: (ha-764617-m02) DBG | SSH cmd err, output: <nil>: 
	I0816 17:05:14.276757   27287 main.go:141] libmachine: (ha-764617-m02) KVM machine creation complete!
	I0816 17:05:14.277004   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetConfigRaw
	I0816 17:05:14.277523   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:14.277727   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:14.277913   27287 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 17:05:14.277925   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:05:14.279235   27287 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 17:05:14.279250   27287 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 17:05:14.279258   27287 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 17:05:14.279267   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.281382   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.281636   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.281666   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.281808   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.281955   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.282111   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.282212   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.282368   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.282621   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.282638   27287 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 17:05:14.379575   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:05:14.379602   27287 main.go:141] libmachine: Detecting the provisioner...
	I0816 17:05:14.379612   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.382527   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.382891   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.382923   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.383040   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.383210   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.383406   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.383542   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.383795   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.383969   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.383981   27287 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 17:05:14.480969   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 17:05:14.481075   27287 main.go:141] libmachine: found compatible host: buildroot
	I0816 17:05:14.481088   27287 main.go:141] libmachine: Provisioning with buildroot...
	I0816 17:05:14.481099   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:05:14.481349   27287 buildroot.go:166] provisioning hostname "ha-764617-m02"
	I0816 17:05:14.481379   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:05:14.481574   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.484312   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.484718   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.484743   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.484911   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.485089   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.485264   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.485418   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.485636   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.485815   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.485829   27287 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617-m02 && echo "ha-764617-m02" | sudo tee /etc/hostname
	I0816 17:05:14.598108   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617-m02
	
	I0816 17:05:14.598133   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.601493   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.601919   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.601951   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.602152   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.602347   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.602499   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.602619   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.602763   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.602976   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.603001   27287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:05:14.710138   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:05:14.710174   27287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:05:14.710194   27287 buildroot.go:174] setting up certificates
	I0816 17:05:14.710205   27287 provision.go:84] configureAuth start
	I0816 17:05:14.710217   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:05:14.710524   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:14.713732   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.714158   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.714191   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.714354   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.716407   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.716766   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.716796   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.716917   27287 provision.go:143] copyHostCerts
	I0816 17:05:14.716949   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:05:14.716990   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:05:14.717002   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:05:14.717079   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:05:14.717184   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:05:14.717211   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:05:14.717220   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:05:14.717261   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:05:14.717340   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:05:14.717364   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:05:14.717374   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:05:14.717410   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:05:14.717489   27287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617-m02 san=[127.0.0.1 192.168.39.184 ha-764617-m02 localhost minikube]
	I0816 17:05:15.172415   27287 provision.go:177] copyRemoteCerts
	I0816 17:05:15.172467   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:05:15.172488   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.175218   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.175574   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.175596   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.175818   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.176028   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.176205   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.176337   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:15.255011   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:05:15.255084   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 17:05:15.280457   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:05:15.280534   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:05:15.309859   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:05:15.309917   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:05:15.334358   27287 provision.go:87] duration metric: took 624.139811ms to configureAuth
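
The three scp calls above install the freshly generated server certificate, its key, and the CA onto the new node. To confirm by hand which SANs ended up in the server cert (the san=[...] list logged a few lines earlier), a standard openssl invocation on the guest would show them; this is only a verification hint, not part of the test flow:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
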
	I0816 17:05:15.334386   27287 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:05:15.334544   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:05:15.334639   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.337113   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.337604   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.337635   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.337772   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.337980   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.338161   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.338290   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.338443   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:15.338647   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:15.338663   27287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:05:15.594238   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:05:15.594260   27287 main.go:141] libmachine: Checking connection to Docker...
	I0816 17:05:15.594268   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetURL
	I0816 17:05:15.595833   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Using libvirt version 6000000
	I0816 17:05:15.598037   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.598365   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.598392   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.598609   27287 main.go:141] libmachine: Docker is up and running!
	I0816 17:05:15.598626   27287 main.go:141] libmachine: Reticulating splines...
	I0816 17:05:15.598632   27287 client.go:171] duration metric: took 21.12084563s to LocalClient.Create
	I0816 17:05:15.598652   27287 start.go:167] duration metric: took 21.12090112s to libmachine.API.Create "ha-764617"
	I0816 17:05:15.598661   27287 start.go:293] postStartSetup for "ha-764617-m02" (driver="kvm2")
	I0816 17:05:15.598670   27287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:05:15.598693   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.598897   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:05:15.598919   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.601355   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.601756   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.601787   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.601977   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.602157   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.602357   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.602513   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:15.678515   27287 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:05:15.682486   27287 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:05:15.682513   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:05:15.682605   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:05:15.682708   27287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:05:15.682721   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:05:15.682837   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:05:15.691786   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:05:15.714314   27287 start.go:296] duration metric: took 115.641935ms for postStartSetup
	I0816 17:05:15.714368   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetConfigRaw
	I0816 17:05:15.714977   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:15.717734   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.718053   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.718074   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.718364   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:05:15.718577   27287 start.go:128] duration metric: took 21.258778684s to createHost
	I0816 17:05:15.718598   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.721229   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.721603   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.721633   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.721787   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.721954   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.722168   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.722332   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.722508   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:15.722688   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:15.722701   27287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:05:15.821238   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827915.794612205
	
	I0816 17:05:15.821257   27287 fix.go:216] guest clock: 1723827915.794612205
	I0816 17:05:15.821267   27287 fix.go:229] Guest: 2024-08-16 17:05:15.794612205 +0000 UTC Remote: 2024-08-16 17:05:15.718589053 +0000 UTC m=+64.576049869 (delta=76.023152ms)
	I0816 17:05:15.821285   27287 fix.go:200] guest clock delta is within tolerance: 76.023152ms
	I0816 17:05:15.821314   27287 start.go:83] releasing machines lock for "ha-764617-m02", held for 21.36157963s
	I0816 17:05:15.821341   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.821626   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:15.824154   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.824543   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.824576   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.827172   27287 out.go:177] * Found network options:
	I0816 17:05:15.828431   27287 out.go:177]   - NO_PROXY=192.168.39.18
	W0816 17:05:15.829535   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:05:15.829578   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.830199   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.830386   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.830468   27287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:05:15.830514   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	W0816 17:05:15.830612   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:05:15.830691   27287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:05:15.830714   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.833788   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834027   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834178   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.834203   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834382   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.834413   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834383   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.834584   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.834661   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.834734   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.834813   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.834915   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.834932   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:15.835044   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:16.068814   27287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:05:16.075538   27287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:05:16.075601   27287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:05:16.091482   27287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 17:05:16.091504   27287 start.go:495] detecting cgroup driver to use...
	I0816 17:05:16.091561   27287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:05:16.110257   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:05:16.126323   27287 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:05:16.126375   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:05:16.141399   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:05:16.154490   27287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:05:16.269157   27287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:05:16.407657   27287 docker.go:233] disabling docker service ...
	I0816 17:05:16.407721   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:05:16.421434   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:05:16.433516   27287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:05:16.567272   27287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:05:16.689387   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:05:16.703189   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:05:16.721006   27287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:05:16.721072   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.731367   27287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:05:16.731440   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.741272   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.751083   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.761289   27287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:05:16.771215   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.781163   27287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.797739   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.808133   27287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:05:16.817377   27287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 17:05:16.817434   27287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 17:05:16.829184   27287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:05:16.838635   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:05:16.952234   27287 ssh_runner.go:195] Run: sudo systemctl restart crio
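	Taken together, the crictl.yaml write and the sed edits above configure CRI-O for this run; a quick illustrative check (not part of the test output; key names and values taken from the commands logged above) would be:
	    sudo crictl info >/dev/null   # crictl now points at unix:///var/run/crio/crio.sock
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected after the edits:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",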
	I0816 17:05:17.091683   27287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:05:17.091750   27287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:05:17.096146   27287 start.go:563] Will wait 60s for crictl version
	I0816 17:05:17.096190   27287 ssh_runner.go:195] Run: which crictl
	I0816 17:05:17.099403   27287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:05:17.135869   27287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:05:17.135939   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:05:17.164105   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:05:17.191765   27287 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:05:17.192910   27287 out.go:177]   - env NO_PROXY=192.168.39.18
	I0816 17:05:17.193933   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:17.197050   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:17.197441   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:17.197469   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:17.197706   27287 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:05:17.202722   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:05:17.215145   27287 mustload.go:65] Loading cluster: ha-764617
	I0816 17:05:17.215352   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:05:17.215607   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:05:17.215647   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:05:17.230152   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0816 17:05:17.230644   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:05:17.231066   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:05:17.231083   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:05:17.231367   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:05:17.231514   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:05:17.232989   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:05:17.233254   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:05:17.233282   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:05:17.247698   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0816 17:05:17.248105   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:05:17.248561   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:05:17.248587   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:05:17.248858   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:05:17.249057   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:05:17.249234   27287 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.184
	I0816 17:05:17.249245   27287 certs.go:194] generating shared ca certs ...
	I0816 17:05:17.249257   27287 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:05:17.249376   27287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:05:17.249423   27287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:05:17.249432   27287 certs.go:256] generating profile certs ...
	I0816 17:05:17.249502   27287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:05:17.249525   27287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157
	I0816 17:05:17.249556   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.184 192.168.39.254]
	I0816 17:05:17.330711   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157 ...
	I0816 17:05:17.330737   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157: {Name:mk01e6747a8590487bd79267069b868aeffb68c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:05:17.330890   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157 ...
	I0816 17:05:17.330903   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157: {Name:mke5c2cbeaef23a1785ed59c672deb9d987932b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:05:17.330968   27287 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:05:17.331088   27287 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:05:17.331208   27287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:05:17.331223   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:05:17.331235   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:05:17.331248   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:05:17.331260   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:05:17.331272   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:05:17.331284   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:05:17.331302   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:05:17.331314   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:05:17.331358   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:05:17.331382   27287 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:05:17.331388   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:05:17.331412   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:05:17.331433   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:05:17.331455   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:05:17.331497   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:05:17.331521   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.331534   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.331546   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.331577   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:05:17.334829   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:17.335193   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:05:17.335215   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:17.335399   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:05:17.335609   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:05:17.335758   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:05:17.335881   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:05:17.413021   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 17:05:17.417918   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 17:05:17.428202   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 17:05:17.431990   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 17:05:17.441799   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 17:05:17.446073   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 17:05:17.457196   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 17:05:17.461127   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 17:05:17.470431   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 17:05:17.474049   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 17:05:17.484125   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 17:05:17.488044   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 17:05:17.497695   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:05:17.521837   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:05:17.545316   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:05:17.568540   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:05:17.593049   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 17:05:17.614659   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 17:05:17.637248   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:05:17.660202   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:05:17.685154   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:05:17.709667   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:05:17.734027   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:05:17.758301   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 17:05:17.774810   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 17:05:17.791619   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 17:05:17.808482   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 17:05:17.824086   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 17:05:17.839536   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 17:05:17.856346   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 17:05:17.872671   27287 ssh_runner.go:195] Run: openssl version
	I0816 17:05:17.878303   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:05:17.888775   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.892983   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.893042   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.898619   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:05:17.909254   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:05:17.920282   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.924763   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.924828   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.930414   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 17:05:17.941647   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:05:17.952642   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.957203   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.957264   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.963130   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
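	The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: the value printed by `openssl x509 -hash` becomes the `<hash>.0` link under /etc/ssl/certs. An illustrative example for the minikube CA, using the filenames from this log:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0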
	I0816 17:05:17.973871   27287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:05:17.977962   27287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:05:17.978020   27287 kubeadm.go:934] updating node {m02 192.168.39.184 8443 v1.31.0 crio true true} ...
	I0816 17:05:17.978119   27287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:05:17.978153   27287 kube-vip.go:115] generating kube-vip config ...
	I0816 17:05:17.978198   27287 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:05:17.993064   27287 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:05:17.993141   27287 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
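	Once kubelet picks this manifest up from /etc/kubernetes/manifests (it is copied there as kube-vip.yaml a few lines below), kube-vip should hold the control-plane VIP on the configured interface. An illustrative on-node check, using only the values from the config above:
	    ip addr show eth0 | grep -w 192.168.39.254   # VIP bound by kube-vip (vip_interface/address above)
	    # the VIP then fronts the apiserver on port 8443 (lb_port above)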
	I0816 17:05:17.993203   27287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:05:18.002372   27287 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 17:05:18.002434   27287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 17:05:18.012001   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 17:05:18.012025   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:05:18.012100   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:05:18.012101   27287 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 17:05:18.012133   27287 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 17:05:18.015936   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 17:05:18.015959   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 17:05:18.945188   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:05:18.959054   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:05:18.959177   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:05:18.963226   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 17:05:18.963265   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 17:05:19.011628   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:05:19.011722   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:05:19.039543   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 17:05:19.039587   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
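	The checksum-pinned URLs above reference the sha256 files Kubernetes publishes alongside each binary on dl.k8s.io; verifying one of these downloads by hand would look roughly like this (an illustrative sketch, not minikube's exact download.go code path):
	    curl -fsSLO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm
	    curl -fsSLO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check -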
	I0816 17:05:19.448167   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 17:05:19.456878   27287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 17:05:19.472028   27287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:05:19.487352   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 17:05:19.503229   27287 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:05:19.506741   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:05:19.518344   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:05:19.633708   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:05:19.649364   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:05:19.649821   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:05:19.649869   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:05:19.665777   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0816 17:05:19.666191   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:05:19.666695   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:05:19.666719   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:05:19.667010   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:05:19.667214   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:05:19.667340   27287 start.go:317] joinCluster: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:05:19.667431   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 17:05:19.667450   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:05:19.670648   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:19.671071   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:05:19.671101   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:19.671264   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:05:19.671411   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:05:19.671568   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:05:19.671714   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:05:19.819563   27287 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:05:19.819617   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5eeya2.4dclp2q50i3hu1c0 --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443"
	I0816 17:05:47.562480   27287 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5eeya2.4dclp2q50i3hu1c0 --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443": (27.742838871s)
	I0816 17:05:47.562514   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 17:05:48.085888   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-764617-m02 minikube.k8s.io/updated_at=2024_08_16T17_05_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-764617 minikube.k8s.io/primary=false
	I0816 17:05:48.193572   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-764617-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 17:05:48.317060   27287 start.go:319] duration metric: took 28.649716421s to joinCluster
	I0816 17:05:48.317133   27287 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:05:48.317412   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:05:48.318876   27287 out.go:177] * Verifying Kubernetes components...
	I0816 17:05:48.320297   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:05:48.576351   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:05:48.624479   27287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:05:48.624847   27287 kapi.go:59] client config for ha-764617: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt", KeyFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key", CAFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 17:05:48.624934   27287 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.18:8443
	I0816 17:05:48.625243   27287 node_ready.go:35] waiting up to 6m0s for node "ha-764617-m02" to be "Ready" ...
	I0816 17:05:48.625361   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:48.625373   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:48.625384   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:48.625395   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:48.635759   27287 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0816 17:05:49.125849   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:49.125873   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:49.125882   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:49.125891   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:49.129669   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:49.625957   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:49.625985   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:49.625996   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:49.626002   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:49.631635   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:05:50.126287   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:50.126314   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:50.126325   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:50.126333   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:50.129386   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:50.626242   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:50.626267   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:50.626277   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:50.626282   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:50.629996   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:50.630684   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:51.125895   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:51.125920   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:51.125932   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:51.125940   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:51.129303   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:51.625899   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:51.625918   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:51.625926   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:51.625929   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:51.629226   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:52.125938   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:52.125959   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:52.125971   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:52.125977   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:52.129480   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:52.625960   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:52.625988   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:52.626001   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:52.626010   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:52.629829   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:53.125688   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:53.125715   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:53.125728   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:53.125737   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:53.129371   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:53.129815   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:53.625804   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:53.625824   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:53.625832   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:53.625837   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:53.629053   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:54.125891   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:54.125911   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:54.125918   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:54.125922   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:54.129059   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:54.626345   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:54.626368   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:54.626378   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:54.626383   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:54.629930   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:55.126447   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:55.126466   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:55.126475   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:55.126479   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:55.130021   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:55.130693   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:55.626194   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:55.626219   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:55.626231   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:55.626238   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:55.629738   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:56.125942   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:56.125966   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:56.125976   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:56.125980   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:56.129149   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:56.625647   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:56.625670   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:56.625685   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:56.625690   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:56.629084   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:57.126223   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:57.126248   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:57.126256   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:57.126260   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:57.129302   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:57.625891   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:57.625914   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:57.625922   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:57.625926   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:57.629144   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:57.629735   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:58.125853   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:58.125871   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:58.125879   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:58.125882   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:58.129672   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:58.625546   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:58.625570   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:58.625579   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:58.625584   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:58.639274   27287 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0816 17:05:59.125870   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:59.125892   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:59.125900   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:59.125904   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:59.129682   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:59.626254   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:59.626306   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:59.626317   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:59.626325   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:59.630113   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:59.630646   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:06:00.125423   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:00.125445   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:00.125456   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:00.125461   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:00.128282   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:00.625883   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:00.625908   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:00.625916   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:00.625920   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:00.629761   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:01.125630   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:01.125653   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:01.125662   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:01.125669   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:01.128763   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:01.625534   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:01.625559   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:01.625579   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:01.625585   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:01.628446   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:02.126446   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:02.126466   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:02.126474   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:02.126479   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:02.130362   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:02.131056   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:06:02.626468   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:02.626493   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:02.626502   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:02.626506   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:02.629586   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:03.125618   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:03.125642   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:03.125650   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:03.125654   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:03.128720   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:03.625485   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:03.625510   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:03.625516   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:03.625520   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:03.628843   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:04.125808   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:04.125831   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:04.125838   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:04.125842   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:04.129078   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:04.626408   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:04.626430   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:04.626438   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:04.626442   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:04.629697   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:04.630263   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:06:05.126111   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:05.126133   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:05.126141   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:05.126147   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:05.129626   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:05.625626   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:05.625649   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:05.625657   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:05.625660   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:05.629302   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:06.125886   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:06.125908   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:06.125915   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:06.125919   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:06.129037   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:06.625492   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:06.625514   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:06.625523   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:06.625527   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:06.629059   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.126088   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.126120   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.126129   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.126133   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.130295   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:07.130922   27287 node_ready.go:49] node "ha-764617-m02" has status "Ready":"True"
	I0816 17:06:07.130940   27287 node_ready.go:38] duration metric: took 18.50566774s for node "ha-764617-m02" to be "Ready" ...
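The polling loop logged above re-issues GET /api/v1/nodes/ha-764617-m02 roughly twice a second until the node reports the Ready condition. For orientation only (this is not the code that produced the log), a minimal client-go sketch of the same readiness check could look like the following; the kubeconfig path is a placeholder, while the node name is taken from the log:

    // nodeready_sketch.go: illustrative client-go readiness poll, not part of minikube.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; adjust for the environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-764617-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            // Roughly the same ~500ms cadence as the GET requests above.
            time.Sleep(500 * time.Millisecond)
        }
    }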
	I0816 17:06:07.130947   27287 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:06:07.131007   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:07.131017   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.131024   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.131027   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.136228   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:06:07.141833   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.141903   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d6c7g
	I0816 17:06:07.141909   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.141922   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.141929   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.145327   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.146083   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.146096   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.146103   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.146106   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.149233   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.149830   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.149846   27287 pod_ready.go:82] duration metric: took 7.989214ms for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.149857   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.149910   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rhb6h
	I0816 17:06:07.149920   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.149929   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.149936   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.154058   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:07.154960   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.154974   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.154983   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.154987   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.157780   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.158442   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.158456   27287 pod_ready.go:82] duration metric: took 8.592818ms for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.158465   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.158511   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617
	I0816 17:06:07.158518   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.158525   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.158529   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.161185   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.161743   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.161756   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.161764   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.161769   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.164153   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.164684   27287 pod_ready.go:93] pod "etcd-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.164700   27287 pod_ready.go:82] duration metric: took 6.229555ms for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.164708   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.164749   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m02
	I0816 17:06:07.164756   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.164763   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.164767   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.167071   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.167532   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.167545   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.167554   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.167559   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.170156   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.170924   27287 pod_ready.go:93] pod "etcd-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.170938   27287 pod_ready.go:82] duration metric: took 6.224878ms for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.170950   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.326885   27287 request.go:632] Waited for 155.886265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:06:07.326971   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:06:07.326983   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.326995   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.327007   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.331545   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:07.526808   27287 request.go:632] Waited for 194.414508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.526869   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.526880   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.526888   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.526895   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.529997   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.530407   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.530424   27287 pod_ready.go:82] duration metric: took 359.467581ms for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.530433   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.726626   27287 request.go:632] Waited for 196.114068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:06:07.726680   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:06:07.726685   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.726695   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.726700   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.729960   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.927084   27287 request.go:632] Waited for 196.35442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.927140   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.927146   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.927153   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.927157   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.930674   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.931268   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.931286   27287 pod_ready.go:82] duration metric: took 400.847633ms for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.931295   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.126382   27287 request.go:632] Waited for 195.016683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:06:08.126448   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:06:08.126456   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.126493   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.126505   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.130005   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.326983   27287 request.go:632] Waited for 196.407146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:08.327035   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:08.327040   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.327050   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.327055   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.330358   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.331167   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:08.331187   27287 pod_ready.go:82] duration metric: took 399.883787ms for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.331197   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.526198   27287 request.go:632] Waited for 194.936804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:06:08.526271   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:06:08.526282   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.526290   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.526296   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.529885   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.726890   27287 request.go:632] Waited for 196.397476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:08.726937   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:08.726942   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.726950   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.726956   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.730426   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.730891   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:08.730908   27287 pod_ready.go:82] duration metric: took 399.705397ms for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.730920   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.926092   27287 request.go:632] Waited for 195.101826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:06:08.926174   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:06:08.926185   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.926196   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.926205   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.929724   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.126742   27287 request.go:632] Waited for 196.364545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:09.126820   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:09.126828   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.126839   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.126846   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.130173   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.130824   27287 pod_ready.go:93] pod "kube-proxy-g5szr" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:09.130842   27287 pod_ready.go:82] duration metric: took 399.914041ms for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.130853   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.326977   27287 request.go:632] Waited for 196.050409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:06:09.327040   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:06:09.327049   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.327057   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.327067   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.330384   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.526672   27287 request.go:632] Waited for 195.249789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.526748   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.526759   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.526771   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.526780   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.530244   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.530754   27287 pod_ready.go:93] pod "kube-proxy-j75vc" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:09.530772   27287 pod_ready.go:82] duration metric: took 399.912331ms for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.530780   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.726271   27287 request.go:632] Waited for 195.417063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:06:09.726348   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:06:09.726354   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.726362   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.726367   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.729273   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:09.926217   27287 request.go:632] Waited for 196.280639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.926274   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.926279   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.926286   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.926290   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.929573   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.930500   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:09.930521   27287 pod_ready.go:82] duration metric: took 399.733691ms for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.930532   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:10.126599   27287 request.go:632] Waited for 195.994963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:06:10.126685   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:06:10.126692   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.126709   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.126715   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.130573   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:10.326558   27287 request.go:632] Waited for 195.354006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:10.326618   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:10.326624   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.326634   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.326638   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.330414   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:10.331169   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:10.331188   27287 pod_ready.go:82] duration metric: took 400.644815ms for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:10.331201   27287 pod_ready.go:39] duration metric: took 3.200242246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
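Once the node is Ready, the pod_ready phase above checks each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for the PodReady condition. Assuming the same client-go imports and clientset as the node sketch earlier, a hedged per-pod version of that check might be:

    // podReady reports whether the named kube-system pod has condition Ready=True.
    // Sketch only; the pod name is supplied by the caller.
    func podReady(cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }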
	I0816 17:06:10.331218   27287 api_server.go:52] waiting for apiserver process to appear ...
	I0816 17:06:10.331273   27287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:06:10.346906   27287 api_server.go:72] duration metric: took 22.029737745s to wait for apiserver process to appear ...
	I0816 17:06:10.346937   27287 api_server.go:88] waiting for apiserver healthz status ...
	I0816 17:06:10.346960   27287 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0816 17:06:10.353559   27287 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0816 17:06:10.353633   27287 round_trippers.go:463] GET https://192.168.39.18:8443/version
	I0816 17:06:10.353643   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.353650   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.353656   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.354592   27287 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 17:06:10.354683   27287 api_server.go:141] control plane version: v1.31.0
	I0816 17:06:10.354697   27287 api_server.go:131] duration metric: took 7.75392ms to wait for apiserver health ...
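The healthz probe above expects a plain "ok" body from GET /healthz before the version and pod checks continue. Reusing the clientset from the earlier sketch, the same request can be issued through the REST client; this is an illustration, not minikube's implementation:

    // checkHealthz fetches the apiserver /healthz endpoint and returns the raw body,
    // which is the string "ok" on a healthy control plane.
    func checkHealthz(cs *kubernetes.Clientset) (string, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            return "", err
        }
        return string(body), nil
    }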
	I0816 17:06:10.354704   27287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 17:06:10.526994   27287 request.go:632] Waited for 172.221674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.527062   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.527067   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.527075   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.527081   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.532825   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:06:10.537771   27287 system_pods.go:59] 17 kube-system pods found
	I0816 17:06:10.537798   27287 system_pods.go:61] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:06:10.537804   27287 system_pods.go:61] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:06:10.537808   27287 system_pods.go:61] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:06:10.537812   27287 system_pods.go:61] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:06:10.537816   27287 system_pods.go:61] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:06:10.537820   27287 system_pods.go:61] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:06:10.537823   27287 system_pods.go:61] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:06:10.537826   27287 system_pods.go:61] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:06:10.537829   27287 system_pods.go:61] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:06:10.537832   27287 system_pods.go:61] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:06:10.537835   27287 system_pods.go:61] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:06:10.537838   27287 system_pods.go:61] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:06:10.537842   27287 system_pods.go:61] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:06:10.537845   27287 system_pods.go:61] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:06:10.537848   27287 system_pods.go:61] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:06:10.537851   27287 system_pods.go:61] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:06:10.537854   27287 system_pods.go:61] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:06:10.537860   27287 system_pods.go:74] duration metric: took 183.150927ms to wait for pod list to return data ...
	I0816 17:06:10.537869   27287 default_sa.go:34] waiting for default service account to be created ...
	I0816 17:06:10.726208   27287 request.go:632] Waited for 188.25ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:06:10.726268   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:06:10.726273   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.726280   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.726285   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.730022   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:10.730207   27287 default_sa.go:45] found service account: "default"
	I0816 17:06:10.730221   27287 default_sa.go:55] duration metric: took 192.346564ms for default service account to be created ...
	I0816 17:06:10.730228   27287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 17:06:10.926666   27287 request.go:632] Waited for 196.354803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.926718   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.926723   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.926730   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.926734   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.931197   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:10.935789   27287 system_pods.go:86] 17 kube-system pods found
	I0816 17:06:10.935816   27287 system_pods.go:89] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:06:10.935821   27287 system_pods.go:89] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:06:10.935825   27287 system_pods.go:89] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:06:10.935829   27287 system_pods.go:89] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:06:10.935833   27287 system_pods.go:89] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:06:10.935836   27287 system_pods.go:89] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:06:10.935839   27287 system_pods.go:89] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:06:10.935842   27287 system_pods.go:89] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:06:10.935846   27287 system_pods.go:89] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:06:10.935848   27287 system_pods.go:89] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:06:10.935851   27287 system_pods.go:89] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:06:10.935854   27287 system_pods.go:89] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:06:10.935857   27287 system_pods.go:89] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:06:10.935860   27287 system_pods.go:89] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:06:10.935862   27287 system_pods.go:89] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:06:10.935865   27287 system_pods.go:89] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:06:10.935868   27287 system_pods.go:89] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:06:10.935874   27287 system_pods.go:126] duration metric: took 205.640857ms to wait for k8s-apps to be running ...
	I0816 17:06:10.935880   27287 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 17:06:10.935936   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:06:10.950115   27287 system_svc.go:56] duration metric: took 14.228019ms WaitForService to wait for kubelet
	I0816 17:06:10.950139   27287 kubeadm.go:582] duration metric: took 22.632976027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:06:10.950155   27287 node_conditions.go:102] verifying NodePressure condition ...
	I0816 17:06:11.126606   27287 request.go:632] Waited for 176.366577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0816 17:06:11.126656   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0816 17:06:11.126661   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:11.126672   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:11.126675   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:11.130382   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:11.131338   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:06:11.131363   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:06:11.131375   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:06:11.131379   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:06:11.131385   27287 node_conditions.go:105] duration metric: took 181.224588ms to run NodePressure ...
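The NodePressure step above lists every node and records its ephemeral-storage and cpu capacity. A small sketch of reading those two capacity fields with client-go (again reusing the clientset from the first sketch) could be:

    // printNodeCapacity prints the capacity fields echoed in the node_conditions lines above.
    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }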
	I0816 17:06:11.131398   27287 start.go:241] waiting for startup goroutines ...
	I0816 17:06:11.131428   27287 start.go:255] writing updated cluster config ...
	I0816 17:06:11.133606   27287 out.go:201] 
	I0816 17:06:11.135100   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:06:11.135243   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:06:11.136943   27287 out.go:177] * Starting "ha-764617-m03" control-plane node in "ha-764617" cluster
	I0816 17:06:11.138071   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:06:11.138100   27287 cache.go:56] Caching tarball of preloaded images
	I0816 17:06:11.138215   27287 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:06:11.138234   27287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:06:11.138351   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:06:11.138617   27287 start.go:360] acquireMachinesLock for ha-764617-m03: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:06:11.138678   27287 start.go:364] duration metric: took 35.792µs to acquireMachinesLock for "ha-764617-m03"
	I0816 17:06:11.138700   27287 start.go:93] Provisioning new machine with config: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:06:11.138787   27287 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0816 17:06:11.140278   27287 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 17:06:11.140389   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:11.140435   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:11.156921   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0816 17:06:11.157298   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:11.157696   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:11.157714   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:11.157989   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:11.158175   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:11.158308   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:11.158480   27287 start.go:159] libmachine.API.Create for "ha-764617" (driver="kvm2")
	I0816 17:06:11.158505   27287 client.go:168] LocalClient.Create starting
	I0816 17:06:11.158532   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 17:06:11.158564   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:06:11.158579   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:06:11.158623   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 17:06:11.158649   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:06:11.158662   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:06:11.158678   27287 main.go:141] libmachine: Running pre-create checks...
	I0816 17:06:11.158686   27287 main.go:141] libmachine: (ha-764617-m03) Calling .PreCreateCheck
	I0816 17:06:11.158867   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetConfigRaw
	I0816 17:06:11.159187   27287 main.go:141] libmachine: Creating machine...
	I0816 17:06:11.159198   27287 main.go:141] libmachine: (ha-764617-m03) Calling .Create
	I0816 17:06:11.159342   27287 main.go:141] libmachine: (ha-764617-m03) Creating KVM machine...
	I0816 17:06:11.160569   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found existing default KVM network
	I0816 17:06:11.160698   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found existing private KVM network mk-ha-764617
	I0816 17:06:11.160819   27287 main.go:141] libmachine: (ha-764617-m03) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03 ...
	I0816 17:06:11.160844   27287 main.go:141] libmachine: (ha-764617-m03) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 17:06:11.160882   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.160812   28044 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:06:11.160983   27287 main.go:141] libmachine: (ha-764617-m03) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 17:06:11.412790   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.412650   28044 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa...
	I0816 17:06:11.668182   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.668074   28044 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/ha-764617-m03.rawdisk...
	I0816 17:06:11.668206   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Writing magic tar header
	I0816 17:06:11.668216   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Writing SSH key tar header
	I0816 17:06:11.668225   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.668183   28044 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03 ...
	I0816 17:06:11.668301   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03
	I0816 17:06:11.668320   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 17:06:11.668329   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03 (perms=drwx------)
	I0816 17:06:11.668339   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:06:11.668350   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 17:06:11.668359   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 17:06:11.668368   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 17:06:11.668378   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins
	I0816 17:06:11.668388   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 17:06:11.668399   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 17:06:11.668408   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 17:06:11.668414   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home
	I0816 17:06:11.668424   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Skipping /home - not owner
	I0816 17:06:11.668433   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 17:06:11.668440   27287 main.go:141] libmachine: (ha-764617-m03) Creating domain...
	I0816 17:06:11.669535   27287 main.go:141] libmachine: (ha-764617-m03) define libvirt domain using xml: 
	I0816 17:06:11.669561   27287 main.go:141] libmachine: (ha-764617-m03) <domain type='kvm'>
	I0816 17:06:11.669585   27287 main.go:141] libmachine: (ha-764617-m03)   <name>ha-764617-m03</name>
	I0816 17:06:11.669602   27287 main.go:141] libmachine: (ha-764617-m03)   <memory unit='MiB'>2200</memory>
	I0816 17:06:11.669611   27287 main.go:141] libmachine: (ha-764617-m03)   <vcpu>2</vcpu>
	I0816 17:06:11.669616   27287 main.go:141] libmachine: (ha-764617-m03)   <features>
	I0816 17:06:11.669630   27287 main.go:141] libmachine: (ha-764617-m03)     <acpi/>
	I0816 17:06:11.669637   27287 main.go:141] libmachine: (ha-764617-m03)     <apic/>
	I0816 17:06:11.669642   27287 main.go:141] libmachine: (ha-764617-m03)     <pae/>
	I0816 17:06:11.669647   27287 main.go:141] libmachine: (ha-764617-m03)     
	I0816 17:06:11.669652   27287 main.go:141] libmachine: (ha-764617-m03)   </features>
	I0816 17:06:11.669659   27287 main.go:141] libmachine: (ha-764617-m03)   <cpu mode='host-passthrough'>
	I0816 17:06:11.669664   27287 main.go:141] libmachine: (ha-764617-m03)   
	I0816 17:06:11.669671   27287 main.go:141] libmachine: (ha-764617-m03)   </cpu>
	I0816 17:06:11.669676   27287 main.go:141] libmachine: (ha-764617-m03)   <os>
	I0816 17:06:11.669683   27287 main.go:141] libmachine: (ha-764617-m03)     <type>hvm</type>
	I0816 17:06:11.669689   27287 main.go:141] libmachine: (ha-764617-m03)     <boot dev='cdrom'/>
	I0816 17:06:11.669694   27287 main.go:141] libmachine: (ha-764617-m03)     <boot dev='hd'/>
	I0816 17:06:11.669725   27287 main.go:141] libmachine: (ha-764617-m03)     <bootmenu enable='no'/>
	I0816 17:06:11.669743   27287 main.go:141] libmachine: (ha-764617-m03)   </os>
	I0816 17:06:11.669756   27287 main.go:141] libmachine: (ha-764617-m03)   <devices>
	I0816 17:06:11.669770   27287 main.go:141] libmachine: (ha-764617-m03)     <disk type='file' device='cdrom'>
	I0816 17:06:11.669789   27287 main.go:141] libmachine: (ha-764617-m03)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/boot2docker.iso'/>
	I0816 17:06:11.669801   27287 main.go:141] libmachine: (ha-764617-m03)       <target dev='hdc' bus='scsi'/>
	I0816 17:06:11.669813   27287 main.go:141] libmachine: (ha-764617-m03)       <readonly/>
	I0816 17:06:11.669824   27287 main.go:141] libmachine: (ha-764617-m03)     </disk>
	I0816 17:06:11.669838   27287 main.go:141] libmachine: (ha-764617-m03)     <disk type='file' device='disk'>
	I0816 17:06:11.669855   27287 main.go:141] libmachine: (ha-764617-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 17:06:11.669872   27287 main.go:141] libmachine: (ha-764617-m03)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/ha-764617-m03.rawdisk'/>
	I0816 17:06:11.669881   27287 main.go:141] libmachine: (ha-764617-m03)       <target dev='hda' bus='virtio'/>
	I0816 17:06:11.669894   27287 main.go:141] libmachine: (ha-764617-m03)     </disk>
	I0816 17:06:11.669906   27287 main.go:141] libmachine: (ha-764617-m03)     <interface type='network'>
	I0816 17:06:11.669919   27287 main.go:141] libmachine: (ha-764617-m03)       <source network='mk-ha-764617'/>
	I0816 17:06:11.669934   27287 main.go:141] libmachine: (ha-764617-m03)       <model type='virtio'/>
	I0816 17:06:11.669947   27287 main.go:141] libmachine: (ha-764617-m03)     </interface>
	I0816 17:06:11.669958   27287 main.go:141] libmachine: (ha-764617-m03)     <interface type='network'>
	I0816 17:06:11.669971   27287 main.go:141] libmachine: (ha-764617-m03)       <source network='default'/>
	I0816 17:06:11.669982   27287 main.go:141] libmachine: (ha-764617-m03)       <model type='virtio'/>
	I0816 17:06:11.669992   27287 main.go:141] libmachine: (ha-764617-m03)     </interface>
	I0816 17:06:11.670008   27287 main.go:141] libmachine: (ha-764617-m03)     <serial type='pty'>
	I0816 17:06:11.670020   27287 main.go:141] libmachine: (ha-764617-m03)       <target port='0'/>
	I0816 17:06:11.670031   27287 main.go:141] libmachine: (ha-764617-m03)     </serial>
	I0816 17:06:11.670044   27287 main.go:141] libmachine: (ha-764617-m03)     <console type='pty'>
	I0816 17:06:11.670055   27287 main.go:141] libmachine: (ha-764617-m03)       <target type='serial' port='0'/>
	I0816 17:06:11.670067   27287 main.go:141] libmachine: (ha-764617-m03)     </console>
	I0816 17:06:11.670082   27287 main.go:141] libmachine: (ha-764617-m03)     <rng model='virtio'>
	I0816 17:06:11.670096   27287 main.go:141] libmachine: (ha-764617-m03)       <backend model='random'>/dev/random</backend>
	I0816 17:06:11.670107   27287 main.go:141] libmachine: (ha-764617-m03)     </rng>
	I0816 17:06:11.670118   27287 main.go:141] libmachine: (ha-764617-m03)     
	I0816 17:06:11.670128   27287 main.go:141] libmachine: (ha-764617-m03)     
	I0816 17:06:11.670139   27287 main.go:141] libmachine: (ha-764617-m03)   </devices>
	I0816 17:06:11.670149   27287 main.go:141] libmachine: (ha-764617-m03) </domain>
	I0816 17:06:11.670164   27287 main.go:141] libmachine: (ha-764617-m03) 
	I0816 17:06:11.676575   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:9e:e2:cb in network default
	I0816 17:06:11.677145   27287 main.go:141] libmachine: (ha-764617-m03) Ensuring networks are active...
	I0816 17:06:11.677169   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:11.677817   27287 main.go:141] libmachine: (ha-764617-m03) Ensuring network default is active
	I0816 17:06:11.678098   27287 main.go:141] libmachine: (ha-764617-m03) Ensuring network mk-ha-764617 is active
	I0816 17:06:11.678600   27287 main.go:141] libmachine: (ha-764617-m03) Getting domain xml...
	I0816 17:06:11.679382   27287 main.go:141] libmachine: (ha-764617-m03) Creating domain...
	I0816 17:06:12.915512   27287 main.go:141] libmachine: (ha-764617-m03) Waiting to get IP...
	I0816 17:06:12.916236   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:12.916750   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:12.916774   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:12.916711   28044 retry.go:31] will retry after 273.815084ms: waiting for machine to come up
	I0816 17:06:13.192119   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:13.192776   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:13.192807   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:13.192721   28044 retry.go:31] will retry after 272.739513ms: waiting for machine to come up
	I0816 17:06:13.467229   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:13.467817   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:13.467854   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:13.467745   28044 retry.go:31] will retry after 450.727942ms: waiting for machine to come up
	I0816 17:06:13.920234   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:13.920782   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:13.920818   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:13.920701   28044 retry.go:31] will retry after 544.193183ms: waiting for machine to come up
	I0816 17:06:14.466229   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:14.466662   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:14.466688   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:14.466620   28044 retry.go:31] will retry after 511.913006ms: waiting for machine to come up
	I0816 17:06:14.979976   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:14.980459   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:14.980480   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:14.980401   28044 retry.go:31] will retry after 937.618553ms: waiting for machine to come up
	I0816 17:06:15.919639   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:15.920082   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:15.920117   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:15.920026   28044 retry.go:31] will retry after 880.489014ms: waiting for machine to come up
	I0816 17:06:16.802468   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:16.802933   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:16.802957   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:16.802877   28044 retry.go:31] will retry after 1.36764588s: waiting for machine to come up
	I0816 17:06:18.172580   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:18.173085   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:18.173111   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:18.173046   28044 retry.go:31] will retry after 1.838306763s: waiting for machine to come up
	I0816 17:06:20.013961   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:20.014417   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:20.014444   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:20.014371   28044 retry.go:31] will retry after 1.673586915s: waiting for machine to come up
	I0816 17:06:21.689665   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:21.690180   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:21.690212   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:21.690116   28044 retry.go:31] will retry after 2.511086993s: waiting for machine to come up
	I0816 17:06:24.204711   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:24.205193   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:24.205214   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:24.205157   28044 retry.go:31] will retry after 2.19927087s: waiting for machine to come up
	I0816 17:06:26.405994   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:26.406431   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:26.406451   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:26.406391   28044 retry.go:31] will retry after 3.745095666s: waiting for machine to come up
	I0816 17:06:30.153573   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:30.154034   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:30.154058   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:30.153986   28044 retry.go:31] will retry after 4.789795394s: waiting for machine to come up
	I0816 17:06:34.948182   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:34.948661   27287 main.go:141] libmachine: (ha-764617-m03) Found IP for machine: 192.168.39.253
	I0816 17:06:34.948679   27287 main.go:141] libmachine: (ha-764617-m03) Reserving static IP address...
	I0816 17:06:34.948693   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has current primary IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:34.949121   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find host DHCP lease matching {name: "ha-764617-m03", mac: "52:54:00:b2:4e:81", ip: "192.168.39.253"} in network mk-ha-764617
	I0816 17:06:35.024241   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Getting to WaitForSSH function...
	I0816 17:06:35.024268   27287 main.go:141] libmachine: (ha-764617-m03) Reserved static IP address: 192.168.39.253
	I0816 17:06:35.024281   27287 main.go:141] libmachine: (ha-764617-m03) Waiting for SSH to be available...
	I0816 17:06:35.026795   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.027288   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.027315   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.027506   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Using SSH client type: external
	I0816 17:06:35.027538   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa (-rw-------)
	I0816 17:06:35.027570   27287 main.go:141] libmachine: (ha-764617-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 17:06:35.027583   27287 main.go:141] libmachine: (ha-764617-m03) DBG | About to run SSH command:
	I0816 17:06:35.027617   27287 main.go:141] libmachine: (ha-764617-m03) DBG | exit 0
	I0816 17:06:35.148697   27287 main.go:141] libmachine: (ha-764617-m03) DBG | SSH cmd err, output: <nil>: 
	I0816 17:06:35.148982   27287 main.go:141] libmachine: (ha-764617-m03) KVM machine creation complete!
	I0816 17:06:35.149291   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetConfigRaw
	I0816 17:06:35.149823   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:35.149991   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:35.150229   27287 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 17:06:35.150243   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:06:35.151514   27287 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 17:06:35.151531   27287 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 17:06:35.151550   27287 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 17:06:35.151559   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.154047   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.154433   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.154454   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.154636   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.154844   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.154998   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.155145   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.155314   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.155504   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.155515   27287 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 17:06:35.251611   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:06:35.251636   27287 main.go:141] libmachine: Detecting the provisioner...
	I0816 17:06:35.251645   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.254692   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.255051   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.255069   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.255294   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.255515   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.255694   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.255897   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.256100   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.256260   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.256271   27287 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 17:06:35.352847   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 17:06:35.352931   27287 main.go:141] libmachine: found compatible host: buildroot
	I0816 17:06:35.352946   27287 main.go:141] libmachine: Provisioning with buildroot...
	I0816 17:06:35.352960   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:35.353217   27287 buildroot.go:166] provisioning hostname "ha-764617-m03"
	I0816 17:06:35.353249   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:35.353470   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.356181   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.356707   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.356742   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.357178   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.357423   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.357596   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.357756   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.357919   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.358118   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.358133   27287 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617-m03 && echo "ha-764617-m03" | sudo tee /etc/hostname
	I0816 17:06:35.471766   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617-m03
	
	I0816 17:06:35.471796   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.474625   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.475017   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.475047   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.475203   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.475401   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.475591   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.475735   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.475883   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.476080   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.476103   27287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:06:35.585652   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:06:35.585680   27287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:06:35.585693   27287 buildroot.go:174] setting up certificates
	I0816 17:06:35.585700   27287 provision.go:84] configureAuth start
	I0816 17:06:35.585708   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:35.585971   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:35.588524   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.588946   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.588979   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.589077   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.591437   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.591747   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.591768   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.591924   27287 provision.go:143] copyHostCerts
	I0816 17:06:35.591956   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:06:35.591983   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:06:35.591992   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:06:35.592058   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:06:35.592140   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:06:35.592158   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:06:35.592173   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:06:35.592219   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:06:35.592280   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:06:35.592296   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:06:35.592303   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:06:35.592326   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:06:35.592389   27287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617-m03 san=[127.0.0.1 192.168.39.253 ha-764617-m03 localhost minikube]
	I0816 17:06:35.662762   27287 provision.go:177] copyRemoteCerts
	I0816 17:06:35.662814   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:06:35.662835   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.665701   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.666047   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.666075   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.666262   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.666438   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.666551   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.666656   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:35.746127   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:06:35.746201   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:06:35.769929   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:06:35.770012   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 17:06:35.794481   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:06:35.794571   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:06:35.819190   27287 provision.go:87] duration metric: took 233.477927ms to configureAuth
	I0816 17:06:35.819221   27287 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:06:35.819480   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:06:35.819562   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.822367   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.822747   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.822777   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.822929   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.823112   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.823256   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.823376   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.823515   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.823729   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.823793   27287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:06:36.078096   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:06:36.078133   27287 main.go:141] libmachine: Checking connection to Docker...
	I0816 17:06:36.078144   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetURL
	I0816 17:06:36.079497   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Using libvirt version 6000000
	I0816 17:06:36.081628   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.081985   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.082007   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.082204   27287 main.go:141] libmachine: Docker is up and running!
	I0816 17:06:36.082219   27287 main.go:141] libmachine: Reticulating splines...
	I0816 17:06:36.082227   27287 client.go:171] duration metric: took 24.923714073s to LocalClient.Create
	I0816 17:06:36.082249   27287 start.go:167] duration metric: took 24.923767974s to libmachine.API.Create "ha-764617"
	I0816 17:06:36.082261   27287 start.go:293] postStartSetup for "ha-764617-m03" (driver="kvm2")
	I0816 17:06:36.082274   27287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:06:36.082295   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.082574   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:06:36.082601   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:36.084986   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.085346   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.085376   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.085540   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.085739   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.085901   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.086073   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:36.161891   27287 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:06:36.165990   27287 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:06:36.166014   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:06:36.166084   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:06:36.166169   27287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:06:36.166180   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:06:36.166282   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:06:36.174715   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:06:36.197988   27287 start.go:296] duration metric: took 115.714381ms for postStartSetup
	I0816 17:06:36.198032   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetConfigRaw
	I0816 17:06:36.198601   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:36.201489   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.201887   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.201918   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.202168   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:06:36.202411   27287 start.go:128] duration metric: took 25.063611638s to createHost
	I0816 17:06:36.202443   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:36.205107   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.205499   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.205524   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.205684   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.205844   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.205988   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.206109   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.206254   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:36.206419   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:36.206432   27287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:06:36.308842   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827996.286893525
	
	I0816 17:06:36.308865   27287 fix.go:216] guest clock: 1723827996.286893525
	I0816 17:06:36.308876   27287 fix.go:229] Guest: 2024-08-16 17:06:36.286893525 +0000 UTC Remote: 2024-08-16 17:06:36.202426568 +0000 UTC m=+145.059887392 (delta=84.466957ms)
	I0816 17:06:36.308895   27287 fix.go:200] guest clock delta is within tolerance: 84.466957ms
	I0816 17:06:36.308902   27287 start.go:83] releasing machines lock for "ha-764617-m03", held for 25.170212902s
	I0816 17:06:36.308924   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.309142   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:36.311958   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.312372   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.312398   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.314684   27287 out.go:177] * Found network options:
	I0816 17:06:36.316208   27287 out.go:177]   - NO_PROXY=192.168.39.18,192.168.39.184
	W0816 17:06:36.317562   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 17:06:36.317581   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:06:36.317592   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.318048   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.318207   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.318304   27287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:06:36.318340   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	W0816 17:06:36.318416   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 17:06:36.318432   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:06:36.318484   27287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:06:36.318503   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:36.321171   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321384   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321583   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.321607   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321754   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.321868   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.321892   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321912   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.322035   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.322131   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.322226   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.322296   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:36.322375   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.322517   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:36.549816   27287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:06:36.555487   27287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:06:36.555545   27287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:06:36.573414   27287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 17:06:36.573436   27287 start.go:495] detecting cgroup driver to use...
	I0816 17:06:36.573504   27287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:06:36.590169   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:06:36.603784   27287 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:06:36.603836   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:06:36.617748   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:06:36.630805   27287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:06:36.745094   27287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:06:36.898097   27287 docker.go:233] disabling docker service ...
	I0816 17:06:36.898154   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:06:36.911588   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:06:36.923400   27287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:06:37.066157   27287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:06:37.185218   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:06:37.199415   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:06:37.218994   27287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:06:37.219059   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.229416   27287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:06:37.229480   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.239655   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.249436   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.259163   27287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:06:37.269306   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.278899   27287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.295570   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.305152   27287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:06:37.313710   27287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 17:06:37.313760   27287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 17:06:37.326116   27287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:06:37.334896   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:06:37.461973   27287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:06:37.589731   27287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:06:37.589799   27287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:06:37.594349   27287 start.go:563] Will wait 60s for crictl version
	I0816 17:06:37.594404   27287 ssh_runner.go:195] Run: which crictl
	I0816 17:06:37.597876   27287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:06:37.636651   27287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:06:37.636732   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:06:37.663227   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:06:37.691490   27287 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:06:37.693121   27287 out.go:177]   - env NO_PROXY=192.168.39.18
	I0816 17:06:37.694722   27287 out.go:177]   - env NO_PROXY=192.168.39.18,192.168.39.184
	I0816 17:06:37.696038   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:37.698755   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:37.699119   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:37.699145   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:37.699374   27287 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:06:37.703276   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:06:37.715076   27287 mustload.go:65] Loading cluster: ha-764617
	I0816 17:06:37.715374   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:06:37.715741   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:37.715787   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:37.731775   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0816 17:06:37.732200   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:37.732774   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:37.732800   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:37.733080   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:37.733298   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:06:37.734638   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:06:37.734910   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:37.734941   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:37.750425   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0816 17:06:37.750936   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:37.751428   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:37.751452   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:37.751774   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:37.751981   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:06:37.752172   27287 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.253
	I0816 17:06:37.752187   27287 certs.go:194] generating shared ca certs ...
	I0816 17:06:37.752205   27287 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:06:37.752349   27287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:06:37.752405   27287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:06:37.752423   27287 certs.go:256] generating profile certs ...
	I0816 17:06:37.752526   27287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:06:37.752567   27287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e
	I0816 17:06:37.752588   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.184 192.168.39.253 192.168.39.254]
	I0816 17:06:37.883447   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e ...
	I0816 17:06:37.883477   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e: {Name:mke5ffa004a00b8dc15e1b58cef73083e4ecf103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:06:37.883643   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e ...
	I0816 17:06:37.883655   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e: {Name:mk866f1ba5180fdb0967c8d90670c43aaf810f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:06:37.883723   27287 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:06:37.883852   27287 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:06:37.883992   27287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:06:37.884011   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:06:37.884062   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:06:37.884089   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:06:37.884107   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:06:37.884129   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:06:37.884143   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:06:37.884155   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:06:37.884174   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:06:37.884244   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:06:37.884285   27287 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:06:37.884299   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:06:37.884340   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:06:37.884367   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:06:37.884397   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:06:37.884450   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:06:37.884527   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:06:37.884554   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:06:37.884571   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
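The NewFileAsset lines above pair each certificate on the Jenkins host with the path it will occupy inside the guest. A minimal sketch of that pairing, using a plain map and shortened, illustrative paths (the real certs.go/vm_assets.go types carry more metadata than this):

```go
package main

import "fmt"

func main() {
	// Hypothetical local-to-guest certificate mapping, mirroring the
	// NewFileAsset log lines above (paths shortened for illustration).
	assets := map[string]string{
		".minikube/ca.crt":                           "/var/lib/minikube/certs/ca.crt",
		".minikube/ca.key":                           "/var/lib/minikube/certs/ca.key",
		".minikube/profiles/ha-764617/apiserver.crt": "/var/lib/minikube/certs/apiserver.crt",
		".minikube/certs/16753.pem":                  "/usr/share/ca-certificates/16753.pem",
	}
	for src, dst := range assets {
		fmt.Printf("would copy %s -> %s\n", src, dst)
	}
}
```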
	I0816 17:06:37.884610   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:06:37.888020   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:37.888397   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:06:37.888411   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:37.888575   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:06:37.888809   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:06:37.888963   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:06:37.889115   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:06:37.968988   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 17:06:37.974680   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 17:06:37.989666   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 17:06:37.994201   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 17:06:38.004434   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 17:06:38.008441   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 17:06:38.021704   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 17:06:38.026123   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 17:06:38.036326   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 17:06:38.040000   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 17:06:38.049939   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 17:06:38.061869   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
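The stat/scp pairs above only read a secret into memory when it already exists on the running control plane; keys such as sa.key, the front-proxy CA and the etcd CA must be identical on every control-plane node, so they are fetched and later replayed rather than regenerated. A rough sketch of that probe-then-fetch pattern using the stock ssh CLI (host and paths are placeholders, not minikube's actual runner API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// fetchIfPresent reads host:path into memory only when the file exists,
// mirroring the "stat -c %s" probe followed by "scp ... --> memory" in the
// log above; the bytes can then be written out to the joining node.
func fetchIfPresent(host, path string) ([]byte, error) {
	// Probe first: a failed stat means there is no existing secret to reuse.
	if err := exec.Command("ssh", host, "stat", "-c", "%s", path).Run(); err != nil {
		return nil, fmt.Errorf("%s missing on %s: %w", path, host, err)
	}
	// Present: pull the contents back over SSH.
	return exec.Command("ssh", host, "cat", path).Output()
}

func main() {
	// Placeholder host; in the log this is the primary control plane at 192.168.39.18.
	key, err := fetchIfPresent("docker@192.168.39.18", "/var/lib/minikube/certs/sa.key")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("fetched %d bytes\n", len(key))
}
```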
	I0816 17:06:38.072212   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:06:38.096256   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:06:38.120261   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:06:38.143728   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:06:38.166144   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 17:06:38.188759   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:06:38.211236   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:06:38.234675   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:06:38.257472   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:06:38.280278   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:06:38.303544   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:06:38.325759   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 17:06:38.340908   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 17:06:38.356140   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 17:06:38.372500   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 17:06:38.389308   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 17:06:38.405213   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 17:06:38.421405   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 17:06:38.437521   27287 ssh_runner.go:195] Run: openssl version
	I0816 17:06:38.443055   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:06:38.454048   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:06:38.458400   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:06:38.458446   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:06:38.464066   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:06:38.473681   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:06:38.483895   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:06:38.488554   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:06:38.488609   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:06:38.493699   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 17:06:38.504218   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:06:38.513940   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:06:38.518030   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:06:38.518087   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:06:38.523180   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:06:38.533149   27287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:06:38.536987   27287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:06:38.537041   27287 kubeadm.go:934] updating node {m03 192.168.39.253 8443 v1.31.0 crio true true} ...
	I0816 17:06:38.537133   27287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:06:38.537159   27287 kube-vip.go:115] generating kube-vip config ...
	I0816 17:06:38.537199   27287 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:06:38.552759   27287 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:06:38.552834   27287 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 17:06:38.552879   27287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:06:38.561603   27287 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 17:06:38.561655   27287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 17:06:38.570815   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 17:06:38.570847   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 17:06:38.570864   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:06:38.570934   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:06:38.570848   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:06:38.570819   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 17:06:38.571031   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:06:38.571056   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:06:38.578386   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 17:06:38.578415   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 17:06:38.578432   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 17:06:38.578455   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 17:06:38.604998   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:06:38.605083   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:06:38.720683   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 17:06:38.720727   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
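Because /var/lib/minikube/binaries/v1.31.0 does not exist on the new node, kubectl, kubeadm and kubelet are staged from the local cache; the binary.go lines show each download URL paired with its published .sha256 file. A rough sketch of fetching one release binary and verifying it against that checksum (URL as in the log; error handling trimmed, local destination path illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchChecked downloads url to dst and verifies it against the SHA-256
// published at url+".sha256", matching the checksum=file:... pattern above.
func fetchChecked(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the bytes while writing them to disk.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	err := fetchChecked("https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl", "/tmp/kubectl")
	fmt.Println(err)
}
```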
	I0816 17:06:39.388735   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 17:06:39.399345   27287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 17:06:39.416213   27287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:06:39.432846   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 17:06:39.450175   27287 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:06:39.453970   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:06:39.466538   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:06:39.608143   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:06:39.626972   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:06:39.627450   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:39.627509   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:39.643247   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I0816 17:06:39.643826   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:39.644414   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:39.644442   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:39.644838   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:39.645034   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:06:39.645198   27287 start.go:317] joinCluster: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:06:39.645348   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 17:06:39.645369   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:06:39.647997   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:39.648435   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:06:39.648465   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:39.648656   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:06:39.648836   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:06:39.649006   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:06:39.649157   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:06:39.830980   27287 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:06:39.831031   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ym0jgi.rwnboocl3slfp5fi --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0816 17:07:04.251949   27287 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ym0jgi.rwnboocl3slfp5fi --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (24.420887637s)
	I0816 17:07:04.251982   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 17:07:04.730128   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-764617-m03 minikube.k8s.io/updated_at=2024_08_16T17_07_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-764617 minikube.k8s.io/primary=false
	I0816 17:07:04.844238   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-764617-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 17:07:04.984499   27287 start.go:319] duration metric: took 25.339299308s to joinCluster
	I0816 17:07:04.984574   27287 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:07:04.984929   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:07:04.986175   27287 out.go:177] * Verifying Kubernetes components...
	I0816 17:07:04.987571   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:07:05.202133   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:07:05.220118   27287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:07:05.220449   27287 kapi.go:59] client config for ha-764617: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt", KeyFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key", CAFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 17:07:05.220532   27287 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.18:8443
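The rest.Config printed above carries QPS:0 and Burst:0, which client-go treats as its defaults (5 requests/s, burst 10); that is why later requests in this log are delayed with "Waited ... due to client-side throttling". A hedged sketch of building the same kind of client from the profile's kubeconfig while raising those limits (values illustrative, not what minikube itself does):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig path the log shows, then lift the default
	// client-side rate limits so bursts of GETs are not throttled locally.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19461-9545/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 when left at 0
	cfg.Burst = 100 // default is 10 when left at 0

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", client != nil)
}
```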
	I0816 17:07:05.220857   27287 node_ready.go:35] waiting up to 6m0s for node "ha-764617-m03" to be "Ready" ...
	I0816 17:07:05.220966   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:05.220977   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:05.220987   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:05.220995   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:05.223989   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:05.721388   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:05.721413   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:05.721421   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:05.721425   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:05.725056   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:06.221188   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:06.221210   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:06.221219   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:06.221226   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:06.224292   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:06.721948   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:06.721976   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:06.721988   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:06.721996   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:06.725251   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:07.221989   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:07.222020   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:07.222034   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:07.222040   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:07.226279   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:07:07.227118   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:07.721116   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:07.721136   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:07.721147   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:07.721153   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:07.724470   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:08.221845   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:08.221867   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:08.221875   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:08.221879   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:08.225220   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:08.721906   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:08.721929   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:08.721936   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:08.721940   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:08.725450   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:09.221819   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:09.221845   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:09.221856   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:09.221864   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:09.224953   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:09.721061   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:09.721080   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:09.721088   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:09.721091   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:09.724384   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:09.724990   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:10.221254   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:10.221281   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:10.221292   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:10.221299   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:10.224947   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:10.721865   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:10.721890   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:10.721906   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:10.721913   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:10.727785   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:07:11.221053   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:11.221074   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:11.221082   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:11.221086   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:11.224379   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:11.721432   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:11.721458   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:11.721467   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:11.721473   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:11.724805   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:11.725248   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:12.221664   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:12.221686   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:12.221696   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:12.221703   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:12.224962   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:12.721626   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:12.721647   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:12.721655   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:12.721660   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:12.725440   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:13.221169   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:13.221190   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:13.221197   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:13.221201   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:13.224223   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:13.721321   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:13.721349   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:13.721360   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:13.721367   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:13.724617   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:14.221636   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:14.221657   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:14.221665   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:14.221668   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:14.224668   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:14.225283   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:14.722017   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:14.722037   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:14.722046   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:14.722049   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:14.725187   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:15.221869   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:15.221890   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:15.221898   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:15.221903   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:15.225400   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:15.721099   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:15.721125   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:15.721133   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:15.721138   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:15.723985   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:16.221487   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:16.221512   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:16.221524   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:16.221529   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:16.225115   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:16.225567   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:16.721129   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:16.721149   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:16.721159   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:16.721167   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:16.724607   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:17.221755   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:17.221777   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:17.221784   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:17.221787   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:17.224849   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:17.721872   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:17.721892   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:17.721899   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:17.721903   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:17.725194   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:18.221870   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:18.221910   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:18.221929   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:18.221936   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:18.225188   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:18.225847   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:18.721895   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:18.721919   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:18.721927   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:18.721932   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:18.725279   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:19.221868   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:19.221888   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:19.221897   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:19.221901   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:19.225584   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:19.721838   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:19.721864   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:19.721877   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:19.721884   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:19.725111   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:20.221883   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:20.221910   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:20.221921   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:20.221925   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:20.225402   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:20.225927   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:20.721946   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:20.721973   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:20.721981   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:20.721987   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:20.725470   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:21.221077   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:21.221100   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:21.221111   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:21.221115   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:21.224422   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:21.721833   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:21.721853   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:21.721861   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:21.721865   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:21.725192   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.221865   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:22.221886   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.221894   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.221897   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.225581   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.226057   27287 node_ready.go:49] node "ha-764617-m03" has status "Ready":"True"
	I0816 17:07:22.226073   27287 node_ready.go:38] duration metric: took 17.005191544s for node "ha-764617-m03" to be "Ready" ...
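node_ready.go polls GET /api/v1/nodes/ha-764617-m03 roughly every 500ms until the node's Ready condition flips to True, which here took about 17s after the join. A sketch of the equivalent loop with client-go (client construction omitted; see the earlier sketch) could look like this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True,
// mirroring the GET loop against /api/v1/nodes/<name> in the log above.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	var client kubernetes.Interface // assume built as in the earlier sketch
	if client == nil {
		fmt.Println("no client configured; illustration only")
		return
	}
	fmt.Println(waitNodeReady(context.Background(), client, "ha-764617-m03", 6*time.Minute))
}
```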
	I0816 17:07:22.226081   27287 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:07:22.226140   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:22.226150   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.226157   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.226161   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.231618   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:07:22.237858   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.237926   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d6c7g
	I0816 17:07:22.237934   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.237942   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.237946   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.240591   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.241240   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:22.241257   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.241264   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.241267   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.244062   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.244536   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.244551   27287 pod_ready.go:82] duration metric: took 6.674274ms for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.244559   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.244639   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rhb6h
	I0816 17:07:22.244651   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.244659   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.244663   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.247015   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.247522   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:22.247535   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.247542   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.247547   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.250092   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.250520   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.250539   27287 pod_ready.go:82] duration metric: took 5.973797ms for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.250550   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.250600   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617
	I0816 17:07:22.250607   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.250614   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.250618   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.253077   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.253728   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:22.253741   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.253748   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.253751   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.255903   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.256324   27287 pod_ready.go:93] pod "etcd-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.256339   27287 pod_ready.go:82] duration metric: took 5.782852ms for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.256348   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.256393   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m02
	I0816 17:07:22.256400   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.256406   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.256410   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.258656   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.259145   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:22.259160   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.259167   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.259170   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.261391   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.261931   27287 pod_ready.go:93] pod "etcd-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.261952   27287 pod_ready.go:82] duration metric: took 5.594854ms for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.261963   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.422196   27287 request.go:632] Waited for 160.179926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m03
	I0816 17:07:22.422272   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m03
	I0816 17:07:22.422280   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.422288   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.422294   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.425474   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.622409   27287 request.go:632] Waited for 196.369915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:22.622456   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:22.622462   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.622469   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.622473   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.625652   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.626282   27287 pod_ready.go:93] pod "etcd-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.626300   27287 pod_ready.go:82] duration metric: took 364.331128ms for pod "etcd-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.626315   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.822194   27287 request.go:632] Waited for 195.782729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:07:22.822243   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:07:22.822248   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.822256   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.822260   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.825236   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:23.022257   27287 request.go:632] Waited for 196.345945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:23.022327   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:23.022334   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.022342   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.022346   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.025755   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.026223   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:23.026239   27287 pod_ready.go:82] duration metric: took 399.906406ms for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.026254   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.222330   27287 request.go:632] Waited for 195.998261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:07:23.222384   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:07:23.222390   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.222397   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.222401   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.225522   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.422660   27287 request.go:632] Waited for 196.348509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:23.422727   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:23.422736   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.422746   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.422755   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.426049   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.426813   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:23.426831   27287 pod_ready.go:82] duration metric: took 400.568472ms for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.426843   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.621895   27287 request.go:632] Waited for 194.984593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m03
	I0816 17:07:23.621956   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m03
	I0816 17:07:23.621963   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.621973   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.621980   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.625094   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.822265   27287 request.go:632] Waited for 196.377357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:23.822343   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:23.822351   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.822361   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.822369   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.825375   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:23.826005   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:23.826025   27287 pod_ready.go:82] duration metric: took 399.170806ms for pod "kube-apiserver-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.826037   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.022117   27287 request.go:632] Waited for 196.010046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:07:24.022183   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:07:24.022189   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.022197   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.022202   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.025588   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.222489   27287 request.go:632] Waited for 196.285418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:24.222554   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:24.222562   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.222572   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.222581   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.226019   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.226722   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:24.226736   27287 pod_ready.go:82] duration metric: took 400.688319ms for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.226746   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.422869   27287 request.go:632] Waited for 196.059866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:07:24.422949   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:07:24.422956   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.422964   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.422971   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.426540   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.622518   27287 request.go:632] Waited for 195.329518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:24.622600   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:24.622607   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.622614   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.622620   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.625819   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.626643   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:24.626663   27287 pod_ready.go:82] duration metric: took 399.910627ms for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.626679   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.822518   27287 request.go:632] Waited for 195.76611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m03
	I0816 17:07:24.822578   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m03
	I0816 17:07:24.822583   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.822591   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.822594   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.825956   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.021996   27287 request.go:632] Waited for 195.285348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:25.022054   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:25.022061   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.022071   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.022077   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.026843   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:07:25.027640   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:25.027667   27287 pod_ready.go:82] duration metric: took 400.978413ms for pod "kube-controller-manager-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.027680   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.222633   27287 request.go:632] Waited for 194.862724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:07:25.222695   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:07:25.222702   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.222714   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.222719   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.225761   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.421886   27287 request.go:632] Waited for 195.290047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:25.421981   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:25.421996   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.422005   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.422010   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.425343   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.425903   27287 pod_ready.go:93] pod "kube-proxy-g5szr" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:25.425920   27287 pod_ready.go:82] duration metric: took 398.23273ms for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.425928   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.621878   27287 request.go:632] Waited for 195.891243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:07:25.621930   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:07:25.621935   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.621943   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.621948   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.625111   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.822018   27287 request.go:632] Waited for 196.305037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:25.822089   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:25.822098   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.822107   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.822110   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.825514   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.826094   27287 pod_ready.go:93] pod "kube-proxy-j75vc" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:25.826110   27287 pod_ready.go:82] duration metric: took 400.176235ms for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.826119   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mgvzm" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.022246   27287 request.go:632] Waited for 196.048177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgvzm
	I0816 17:07:26.022342   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgvzm
	I0816 17:07:26.022355   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.022365   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.022374   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.026823   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:07:26.222887   27287 request.go:632] Waited for 195.386671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:26.222940   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:26.222945   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.222952   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.222956   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.226009   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:26.226549   27287 pod_ready.go:93] pod "kube-proxy-mgvzm" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:26.226575   27287 pod_ready.go:82] duration metric: took 400.449646ms for pod "kube-proxy-mgvzm" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.226585   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.421902   27287 request.go:632] Waited for 195.224421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:07:26.421958   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:07:26.421963   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.421970   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.421975   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.424870   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:26.622733   27287 request.go:632] Waited for 197.348261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:26.622793   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:26.622798   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.622806   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.622810   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.626044   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:26.626658   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:26.626674   27287 pod_ready.go:82] duration metric: took 400.082715ms for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.626682   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.822944   27287 request.go:632] Waited for 196.180078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:07:26.823002   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:07:26.823008   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.823017   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.823021   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.826512   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.022573   27287 request.go:632] Waited for 195.366257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:27.022621   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:27.022626   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.022635   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.022646   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.025666   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.026468   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:27.026490   27287 pod_ready.go:82] duration metric: took 399.797902ms for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:27.026503   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:27.222447   27287 request.go:632] Waited for 195.876859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m03
	I0816 17:07:27.222518   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m03
	I0816 17:07:27.222526   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.222540   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.222548   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.225901   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.422266   27287 request.go:632] Waited for 195.768523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:27.422360   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:27.422377   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.422385   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.422389   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.425722   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.426363   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:27.426383   27287 pod_ready.go:82] duration metric: took 399.872152ms for pod "kube-scheduler-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:27.426398   27287 pod_ready.go:39] duration metric: took 5.200306061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:07:27.426414   27287 api_server.go:52] waiting for apiserver process to appear ...
	I0816 17:07:27.426468   27287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:07:27.441460   27287 api_server.go:72] duration metric: took 22.456852586s to wait for apiserver process to appear ...
	I0816 17:07:27.441487   27287 api_server.go:88] waiting for apiserver healthz status ...
	I0816 17:07:27.441509   27287 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0816 17:07:27.449407   27287 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0816 17:07:27.449492   27287 round_trippers.go:463] GET https://192.168.39.18:8443/version
	I0816 17:07:27.449503   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.449517   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.449525   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.450369   27287 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 17:07:27.450438   27287 api_server.go:141] control plane version: v1.31.0
	I0816 17:07:27.450452   27287 api_server.go:131] duration metric: took 8.959106ms to wait for apiserver health ...
	I0816 17:07:27.450460   27287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 17:07:27.622863   27287 request.go:632] Waited for 172.327319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:27.622955   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:27.622962   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.622972   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.622976   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.629756   27287 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 17:07:27.638058   27287 system_pods.go:59] 24 kube-system pods found
	I0816 17:07:27.638092   27287 system_pods.go:61] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:07:27.638100   27287 system_pods.go:61] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:07:27.638105   27287 system_pods.go:61] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:07:27.638110   27287 system_pods.go:61] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:07:27.638115   27287 system_pods.go:61] "etcd-ha-764617-m03" [5149ba57-c3cc-40b3-a502-b782ac9e3124] Running
	I0816 17:07:27.638119   27287 system_pods.go:61] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:07:27.638125   27287 system_pods.go:61] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:07:27.638129   27287 system_pods.go:61] "kindnet-fvp67" [cab5cbb1-9c16-4639-a182-f9dc0b5c674a] Running
	I0816 17:07:27.638134   27287 system_pods.go:61] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:07:27.638146   27287 system_pods.go:61] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:07:27.638155   27287 system_pods.go:61] "kube-apiserver-ha-764617-m03" [390f78be-da45-4134-a1f9-a5605a5f8e4d] Running
	I0816 17:07:27.638161   27287 system_pods.go:61] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:07:27.638168   27287 system_pods.go:61] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:07:27.638174   27287 system_pods.go:61] "kube-controller-manager-ha-764617-m03" [5389ff46-3e33-4d65-b268-e749f05c25a7] Running
	I0816 17:07:27.638182   27287 system_pods.go:61] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:07:27.638188   27287 system_pods.go:61] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:07:27.638196   27287 system_pods.go:61] "kube-proxy-mgvzm" [6c8796c4-3856-4e4c-984f-501bba6459e2] Running
	I0816 17:07:27.638202   27287 system_pods.go:61] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:07:27.638207   27287 system_pods.go:61] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:07:27.638213   27287 system_pods.go:61] "kube-scheduler-ha-764617-m03" [6cc05023-8264-4400-856e-5dbf10494aec] Running
	I0816 17:07:27.638222   27287 system_pods.go:61] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:07:27.638228   27287 system_pods.go:61] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:07:27.638235   27287 system_pods.go:61] "kube-vip-ha-764617-m03" [e1ad6002-e6a5-48ef-976e-1212312bd233] Running
	I0816 17:07:27.638240   27287 system_pods.go:61] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:07:27.638248   27287 system_pods.go:74] duration metric: took 187.778992ms to wait for pod list to return data ...
	I0816 17:07:27.638260   27287 default_sa.go:34] waiting for default service account to be created ...
	I0816 17:07:27.822726   27287 request.go:632] Waited for 184.385657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:07:27.822777   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:07:27.822783   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.822791   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.822795   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.826494   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.826619   27287 default_sa.go:45] found service account: "default"
	I0816 17:07:27.826633   27287 default_sa.go:55] duration metric: took 188.367368ms for default service account to be created ...
	I0816 17:07:27.826642   27287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 17:07:28.021999   27287 request.go:632] Waited for 195.297338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:28.022054   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:28.022059   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:28.022115   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:28.022126   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:28.028302   27287 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 17:07:28.035191   27287 system_pods.go:86] 24 kube-system pods found
	I0816 17:07:28.035214   27287 system_pods.go:89] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:07:28.035220   27287 system_pods.go:89] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:07:28.035224   27287 system_pods.go:89] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:07:28.035228   27287 system_pods.go:89] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:07:28.035231   27287 system_pods.go:89] "etcd-ha-764617-m03" [5149ba57-c3cc-40b3-a502-b782ac9e3124] Running
	I0816 17:07:28.035234   27287 system_pods.go:89] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:07:28.035237   27287 system_pods.go:89] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:07:28.035240   27287 system_pods.go:89] "kindnet-fvp67" [cab5cbb1-9c16-4639-a182-f9dc0b5c674a] Running
	I0816 17:07:28.035248   27287 system_pods.go:89] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:07:28.035251   27287 system_pods.go:89] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:07:28.035254   27287 system_pods.go:89] "kube-apiserver-ha-764617-m03" [390f78be-da45-4134-a1f9-a5605a5f8e4d] Running
	I0816 17:07:28.035262   27287 system_pods.go:89] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:07:28.035268   27287 system_pods.go:89] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:07:28.035272   27287 system_pods.go:89] "kube-controller-manager-ha-764617-m03" [5389ff46-3e33-4d65-b268-e749f05c25a7] Running
	I0816 17:07:28.035274   27287 system_pods.go:89] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:07:28.035282   27287 system_pods.go:89] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:07:28.035287   27287 system_pods.go:89] "kube-proxy-mgvzm" [6c8796c4-3856-4e4c-984f-501bba6459e2] Running
	I0816 17:07:28.035290   27287 system_pods.go:89] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:07:28.035293   27287 system_pods.go:89] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:07:28.035296   27287 system_pods.go:89] "kube-scheduler-ha-764617-m03" [6cc05023-8264-4400-856e-5dbf10494aec] Running
	I0816 17:07:28.035299   27287 system_pods.go:89] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:07:28.035301   27287 system_pods.go:89] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:07:28.035304   27287 system_pods.go:89] "kube-vip-ha-764617-m03" [e1ad6002-e6a5-48ef-976e-1212312bd233] Running
	I0816 17:07:28.035307   27287 system_pods.go:89] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:07:28.035312   27287 system_pods.go:126] duration metric: took 208.66562ms to wait for k8s-apps to be running ...
	I0816 17:07:28.035321   27287 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 17:07:28.035361   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:07:28.048890   27287 system_svc.go:56] duration metric: took 13.562693ms WaitForService to wait for kubelet
	I0816 17:07:28.048913   27287 kubeadm.go:582] duration metric: took 23.064308432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:07:28.048934   27287 node_conditions.go:102] verifying NodePressure condition ...
	I0816 17:07:28.222387   27287 request.go:632] Waited for 173.376848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0816 17:07:28.222479   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0816 17:07:28.222489   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:28.222497   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:28.222506   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:28.226095   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:28.227070   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:07:28.227090   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:07:28.227100   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:07:28.227104   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:07:28.227107   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:07:28.227110   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:07:28.227114   27287 node_conditions.go:105] duration metric: took 178.175166ms to run NodePressure ...
	I0816 17:07:28.227124   27287 start.go:241] waiting for startup goroutines ...
	I0816 17:07:28.227145   27287 start.go:255] writing updated cluster config ...
	I0816 17:07:28.227412   27287 ssh_runner.go:195] Run: rm -f paused
	I0816 17:07:28.278695   27287 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 17:07:28.281265   27287 out.go:177] * Done! kubectl is now configured to use "ha-764617" cluster and "default" namespace by default
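	[Editor's note] The log above repeatedly waits for control-plane pods to report the Ready condition (pod_ready.go) and then probes the apiserver's /healthz endpoint (api_server.go) before declaring the cluster usable. The following is a minimal client-go sketch of that same pattern, not minikube's actual implementation; the kubeconfig path and pod name are hypothetical placeholders used only for illustration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// mirroring the "has status Ready:True" checks in the log.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical values for illustration only.
		kubeconfigPath := "/home/user/.kube/config"
		podName := "kube-apiserver-ha-764617"

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// Poll every 2s, for up to 6m, like the per-pod waits above.
		err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
				if err != nil {
					return false, nil // retry on transient errors
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			panic(err)
		}

		// Equivalent of the "Checking apiserver healthz" step: GET /healthz
		// against the apiserver and expect the body "ok".
		body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)
	}

	The repeated "Waited for ...ms due to client-side throttling" lines come from client-go's default rate limiter pacing these GETs; they indicate request pacing, not an error.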
	
	
	==> CRI-O <==
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.526904319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd731da4-04a7-4bd3-8cf9-2cc1a28a7c23 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.527875795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ff906b0-4286-463a-80d3-12bc37037aac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.528467653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828266528443789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ff906b0-4286-463a-80d3-12bc37037aac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.528923028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89b935db-4100-4dbb-858b-dfd7e4f2b581 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.528990276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89b935db-4100-4dbb-858b-dfd7e4f2b581 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.529276055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89b935db-4100-4dbb-858b-dfd7e4f2b581 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.565022054Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=027a3d51-97b6-456d-ad4b-1bf10d53bc85 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.565466299Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-rcq66,Uid:ef4f9584-2155-48ce-80fa-30bac466b9f5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723828049534595642,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T17:07:29.211435080Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:15a0a2d4-69d6-4a6b-9199-f8785e015c3b,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1723827909257516749,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-16T17:05:08.947671592Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-rhb6h,Uid:ea20ec0a-a16e-4703-bb54-2e54c31acd40,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827909252535935,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T17:05:08.945477305Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-d6c7g,Uid:255004b9-d05e-4686-9e9c-6ec6f7aae439,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1723827909243835803,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T17:05:08.937602295Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&PodSandboxMetadata{Name:kindnet-94vkj,Uid:a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827893824088626,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-08-16T17:04:53.516935393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&PodSandboxMetadata{Name:kube-proxy-j75vc,Uid:50262aeb-9d97-4093-a43f-cb24a5515abb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827893800697455,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T17:04:53.492251550Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-764617,Uid:3d9a187c472f17e2ba03b6daf392b7e4,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827882213476307,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d9a187c472f17e2ba03b6daf392b7e4,kubernetes.io/config.seen: 2024-08-16T17:04:41.720950945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-764617,Uid:45c4aa250fdc29f3673166187d642d12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827882205635298,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d1
2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 45c4aa250fdc29f3673166187d642d12,kubernetes.io/config.seen: 2024-08-16T17:04:41.720951957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-764617,Uid:d5ac04a3c0a524fb49fee0e7201d9eee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827882197182861,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{kubernetes.io/config.hash: d5ac04a3c0a524fb49fee0e7201d9eee,kubernetes.io/config.seen: 2024-08-16T17:04:41.720952794Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-764617,Ui
d:32cbd9593cdf012e272df9a250d0e00c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827882191880684,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.18:8443,kubernetes.io/config.hash: 32cbd9593cdf012e272df9a250d0e00c,kubernetes.io/config.seen: 2024-08-16T17:04:41.720949579Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&PodSandboxMetadata{Name:etcd-ha-764617,Uid:3a89700dd245c99cee73a27284c5b094,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723827882185855928,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-764617,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.18:2379,kubernetes.io/config.hash: 3a89700dd245c99cee73a27284c5b094,kubernetes.io/config.seen: 2024-08-16T17:04:41.720945788Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=027a3d51-97b6-456d-ad4b-1bf10d53bc85 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.566405359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ab9d822-8402-4734-b1c3-ff3338289299 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.566467587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ab9d822-8402-4734-b1c3-ff3338289299 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.566702206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ab9d822-8402-4734-b1c3-ff3338289299 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.567508056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfaad0d5-cd83-4a13-96a8-ac8bc8b78cbd name=/runtime.v1.RuntimeService/Version
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.567574921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfaad0d5-cd83-4a13-96a8-ac8bc8b78cbd name=/runtime.v1.RuntimeService/Version
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.569515898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b24e6394-24d3-4a80-b1a4-2b1483b71e1e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.569942411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828266569919137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b24e6394-24d3-4a80-b1a4-2b1483b71e1e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.570472813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd546d9b-1725-4b68-94ed-cc3eb993d4f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.570519017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd546d9b-1725-4b68-94ed-cc3eb993d4f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.570747841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd546d9b-1725-4b68-94ed-cc3eb993d4f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.606836870Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=898cac9a-45e5-41d7-b355-92689d814489 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.606925768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=898cac9a-45e5-41d7-b355-92689d814489 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.608247623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d214c0a-0fde-4b59-9fed-fe24b8f280d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.608707318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828266608683281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d214c0a-0fde-4b59-9fed-fe24b8f280d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.609414154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=214db66b-6602-4a19-b361-46a251346110 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.609483583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=214db66b-6602-4a19-b361-46a251346110 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:11:06 ha-764617 crio[676]: time="2024-08-16 17:11:06.609717987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=214db66b-6602-4a19-b361-46a251346110 name=/runtime.v1.RuntimeService/ListContainers
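
	The debug entries above are CRI-O answering the standard CRI gRPC calls (ListPodSandbox, ListContainers, Version, ImageFsInfo) that the kubelet and the log collection poll periodically, which is why the same container list repeats with only the request id and timestamp changing. As a rough sketch (not part of the captured output, and assuming CRI-O runs as the crio systemd unit on the guest, which the "crio[676]" journal prefix suggests), the same stream can be followed live from inside the ha-764617 VM:

	  $ minikube -p ha-764617 ssh
	  $ sudo journalctl -u crio -f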
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f49214f24a1f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   31ad2ee33305c       busybox-7dff88458-rcq66
	7484d3705a58c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   0158b06f966ce       storage-provisioner
	d21ff55e0d154       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   570a9af97580c       coredns-6f6b679f8f-rhb6h
	8eefbb289cdc6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   a96010807e82a       coredns-6f6b679f8f-d6c7g
	b7c860bdbf8f8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   850550a63d423       kindnet-94vkj
	1aaf72ada1592       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   7fa8ce6eea932       kube-proxy-j75vc
	6b4d4cb04162c       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   29c0393581395       kube-vip-ha-764617
	c020d60e48e21       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   c5d6c0455efc0       etcd-ha-764617
	547ba7c3099cf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   09ec8ad12f1f1       kube-scheduler-ha-764617
	0d7b524ef17cf       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   9410cce2ddb5a       kube-controller-manager-ha-764617
	5964f78981ace       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   df0ff04111d0b       kube-apiserver-ha-764617
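
	The table above is the CRI view of the node rendered one container per row (it matches the output format of crictl ps -a). As a minimal sketch, assuming crictl is available on the node and using the CRI socket named in the cri-socket annotation further below, the same listing can be reproduced by hand:

	  $ minikube -p ha-764617 ssh
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

	All eleven containers report Running with attempt 0, so the control plane on ha-764617 itself looks healthy at this point in the test.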
	
	
	==> coredns [8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf] <==
	[INFO] 10.244.0.4:55343 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117682s
	[INFO] 10.244.0.4:40863 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090711s
	[INFO] 10.244.2.2:52832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114258s
	[INFO] 10.244.2.2:42301 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000139766s
	[INFO] 10.244.1.2:36594 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003847303s
	[INFO] 10.244.1.2:49450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234436s
	[INFO] 10.244.1.2:57236 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164289s
	[INFO] 10.244.1.2:42444 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01143086s
	[INFO] 10.244.1.2:55740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014199s
	[INFO] 10.244.0.4:37842 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213176s
	[INFO] 10.244.2.2:33930 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001767359s
	[INFO] 10.244.2.2:58987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092615s
	[INFO] 10.244.2.2:33562 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210507s
	[INFO] 10.244.1.2:37263 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099612s
	[INFO] 10.244.0.4:45145 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086744s
	[INFO] 10.244.0.4:33500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050948s
	[INFO] 10.244.2.2:35019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095066s
	[INFO] 10.244.2.2:58975 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209149s
	[INFO] 10.244.2.2:53664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077503s
	[INFO] 10.244.1.2:52681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013622s
	[INFO] 10.244.1.2:34428 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179694s
	[INFO] 10.244.1.2:38361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107495s
	[INFO] 10.244.0.4:33031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072835s
	[INFO] 10.244.0.4:46219 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00004433s
	[INFO] 10.244.2.2:36496 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117578s
	
	
	==> coredns [d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5] <==
	[INFO] 10.244.1.2:44737 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149459s
	[INFO] 10.244.0.4:48083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009692s
	[INFO] 10.244.0.4:46968 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001630816s
	[INFO] 10.244.0.4:57470 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132496s
	[INFO] 10.244.0.4:48384 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001005045s
	[INFO] 10.244.0.4:40408 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078758s
	[INFO] 10.244.0.4:54196 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053068s
	[INFO] 10.244.0.4:58299 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099814s
	[INFO] 10.244.2.2:44737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172429s
	[INFO] 10.244.2.2:44835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087998s
	[INFO] 10.244.2.2:59750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001386651s
	[INFO] 10.244.2.2:36531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075822s
	[INFO] 10.244.2.2:33517 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005988s
	[INFO] 10.244.1.2:58731 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174613s
	[INFO] 10.244.1.2:43400 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105057s
	[INFO] 10.244.1.2:41968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104182s
	[INFO] 10.244.0.4:46666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121402s
	[INFO] 10.244.0.4:46004 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066296s
	[INFO] 10.244.2.2:39282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010929s
	[INFO] 10.244.1.2:58290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151089s
	[INFO] 10.244.0.4:38377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152447s
	[INFO] 10.244.0.4:57414 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061601s
	[INFO] 10.244.2.2:49722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182712s
	[INFO] 10.244.2.2:47690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014162s
	[INFO] 10.244.2.2:41318 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108034s
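
	The [INFO] lines in both coredns sections are per-query records from CoreDNS's log plugin: client ip:port, query id, the quoted tuple of type, class, name, transport, request size, DO bit and UDP buffer size, then the response code, response flags, response size and latency. The NXDOMAIN answers for short names like kubernetes.default are the expected search-path probes; the fully qualified kubernetes.default.svc.cluster.local lookups all return NOERROR, so in-cluster DNS is resolving. As a sketch for confirming which plugins are enabled (assuming the stock kubeadm-managed ConfigMap name, coredns, in kube-system):

	  $ kubectl --context ha-764617 -n kube-system get configmap coredns -o yaml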
	
	
	==> describe nodes <==
	Name:               ha-764617
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_04_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:10:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:05:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-764617
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c56e74c3649b4538acc75a2edf2b5dea
	  System UUID:                c56e74c3-649b-4538-acc7-5a2edf2b5dea
	  Boot ID:                    b56c67cf-18b1-46e0-819e-927538c01209
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rcq66              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-6f6b679f8f-d6c7g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-6f6b679f8f-rhb6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-764617                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-94vkj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-764617             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-764617    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-j75vc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-764617             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-764617                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m25s (x7 over 6m25s)  kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m25s (x8 over 6m25s)  kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s (x8 over 6m25s)  kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s                  kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s                  kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s                  kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal  NodeReady                5m58s                  kubelet          Node ha-764617 status is now: NodeReady
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	
	
	Name:               ha-764617-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_05_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:05:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:08:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-764617-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b9263e99d3f46399a1ef68b5c9541da
	  System UUID:                9b9263e9-9d3f-4639-9a1e-f68b5c9541da
	  Boot ID:                    64559aa2-31fd-4afa-b1e1-b351bc809c37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5kg62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-764617-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-7l8xt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-764617-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-764617-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-g5szr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-764617-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-vip-ha-764617-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-764617-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-764617-m02 status is now: NodeNotReady
	
	
	Name:               ha-764617-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_07_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:11:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-764617-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c731249060784cabbf92c847e80f83c3
	  System UUID:                c7312490-6078-4cab-bf92-c847e80f83c3
	  Boot ID:                    af3e1a19-01a5-4968-b106-ed3a1fef8c3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rvd47                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-764617-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-fvp67                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-apiserver-ha-764617-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-ha-764617-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-mgvzm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ha-764617-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-vip-ha-764617-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m6s)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m6s)  kubelet          Node ha-764617-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m6s)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	
	
	Name:               ha-764617-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_08_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:10:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-764617-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6601760275c145fda2c7de8f57c611fa
	  System UUID:                66017602-75c1-45fd-a2c7-de8f57c611fa
	  Boot ID:                    2537bdd8-4785-401f-91cd-561e77b7360b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-785hx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-p9gpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-764617-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal  NodeReady                2m40s                kubelet          Node ha-764617-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug16 17:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050523] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036974] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680592] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.748851] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.529279] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.494535] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.053885] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056699] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.201898] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.107599] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.255485] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.757333] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.397161] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059974] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.993084] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.077626] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.633091] kauditd_printk_skb: 18 callbacks suppressed
	[Aug16 17:05] kauditd_printk_skb: 41 callbacks suppressed
	[ +41.798128] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f] <==
	{"level":"warn","ts":"2024-08-16T17:11:06.859770Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.867254Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.871904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.873050Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.882791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.891518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.898020Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.901704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.905340Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.911442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.917340Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.923766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.926999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.929737Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.934868Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.940659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.946565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.950008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.953086Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.956127Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.961938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.972231Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.973214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.980484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:11:06.982249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:11:07 up 6 min,  0 users,  load average: 0.43, 0.51, 0.26
	Linux ha-764617 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24] <==
	I0816 17:10:28.551599       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:10:38.558773       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:10:38.558932       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:10:38.559125       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:10:38.559221       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:10:38.559312       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:10:38.559332       1 main.go:299] handling current node
	I0816 17:10:38.559364       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:10:38.559380       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:10:48.558696       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:10:48.558799       1 main.go:299] handling current node
	I0816 17:10:48.558828       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:10:48.558837       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:10:48.559079       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:10:48.559095       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:10:48.559251       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:10:48.559278       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:10:58.551271       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:10:58.551382       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:10:58.551613       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:10:58.551638       1 main.go:299] handling current node
	I0816 17:10:58.551660       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:10:58.551676       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:10:58.551744       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:10:58.551763       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110] <==
	W0816 17:04:47.326211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18]
	I0816 17:04:47.327200       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:04:47.331997       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:04:47.701125       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:04:48.671616       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:04:48.688479       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 17:04:48.696576       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:04:53.201660       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0816 17:04:53.459024       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0816 17:07:33.706199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47396: use of closed network connection
	E0816 17:07:33.897650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50922: use of closed network connection
	E0816 17:07:34.095470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50948: use of closed network connection
	E0816 17:07:34.279524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50966: use of closed network connection
	E0816 17:07:34.448894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50984: use of closed network connection
	E0816 17:07:34.624762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51016: use of closed network connection
	E0816 17:07:34.807764       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51030: use of closed network connection
	E0816 17:07:34.983617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51036: use of closed network connection
	E0816 17:07:35.162995       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51062: use of closed network connection
	E0816 17:07:35.438882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51072: use of closed network connection
	E0816 17:07:35.614283       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51098: use of closed network connection
	E0816 17:07:35.789491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51122: use of closed network connection
	E0816 17:07:35.955099       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51152: use of closed network connection
	E0816 17:07:36.141080       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51164: use of closed network connection
	E0816 17:07:36.312313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51186: use of closed network connection
	W0816 17:08:57.342700       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18 192.168.39.253]
	
	
	==> kube-controller-manager [0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183] <==
	I0816 17:08:05.397423       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-764617-m04" podCIDRs=["10.244.3.0/24"]
	I0816 17:08:05.397475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.397557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.408900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.521368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.923939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:07.468502       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-764617-m04"
	I0816 17:08:07.537876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:08.309439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:08.353504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:09.380633       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:09.463903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:15.643550       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:26.461323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:26.461538       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-764617-m04"
	I0816 17:08:26.483748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:27.486425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:35.906607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:09:19.401331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	I0816 17:09:19.401391       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-764617-m04"
	I0816 17:09:19.423873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	I0816 17:09:19.561938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.322302ms"
	I0816 17:09:19.562198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="171.577µs"
	I0816 17:09:22.527934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	I0816 17:09:24.671322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	
	
	==> kube-proxy [1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:04:54.457594       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:04:54.467881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	E0816 17:04:54.467972       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:04:54.505988       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:04:54.506046       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:04:54.506075       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:04:54.508357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:04:54.508740       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:04:54.508807       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:04:54.510310       1 config.go:197] "Starting service config controller"
	I0816 17:04:54.510367       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:04:54.510427       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:04:54.510443       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:04:54.510949       1 config.go:326] "Starting node config controller"
	I0816 17:04:54.510985       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:04:54.610842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 17:04:54.610910       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:04:54.611095       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b] <==
	W0816 17:04:46.565034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:04:46.565183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.612257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 17:04:46.612306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.612647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:04:46.612679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.730255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:04:46.730301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.739812       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:04:46.739857       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:04:46.794371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:04:46.794604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.794371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:04:46.794716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.812070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:04:46.812114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 17:04:48.641797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:07:29.208916       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rvd47" node="ha-764617-m03"
	E0816 17:07:29.209097       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" pod="default/busybox-7dff88458-rvd47"
	E0816 17:07:29.210073       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rcq66" node="ha-764617"
	E0816 17:07:29.218500       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" pod="default/busybox-7dff88458-rcq66"
	E0816 17:08:05.463041       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	E0816 17:08:05.468950       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 82c775a8-d580-4201-9da7-790a5a95ef6f(kube-system/kindnet-785hx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-785hx"
	E0816 17:08:05.469002       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" pod="kube-system/kindnet-785hx"
	I0816 17:08:05.469055       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	
	
	==> kubelet <==
	Aug 16 17:09:48 ha-764617 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:09:48 ha-764617 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:09:48 ha-764617 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:09:48 ha-764617 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:09:48 ha-764617 kubelet[1328]: E0816 17:09:48.684872    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828188684412388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:09:48 ha-764617 kubelet[1328]: E0816 17:09:48.684899    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828188684412388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:09:58 ha-764617 kubelet[1328]: E0816 17:09:58.686487    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828198686041508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:09:58 ha-764617 kubelet[1328]: E0816 17:09:58.686837    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828198686041508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:08 ha-764617 kubelet[1328]: E0816 17:10:08.688668    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828208687887214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:08 ha-764617 kubelet[1328]: E0816 17:10:08.688713    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828208687887214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:18 ha-764617 kubelet[1328]: E0816 17:10:18.690838    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828218690451450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:18 ha-764617 kubelet[1328]: E0816 17:10:18.690864    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828218690451450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:28 ha-764617 kubelet[1328]: E0816 17:10:28.693012    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828228692671726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:28 ha-764617 kubelet[1328]: E0816 17:10:28.693396    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828228692671726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:38 ha-764617 kubelet[1328]: E0816 17:10:38.695269    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828238694884936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:38 ha-764617 kubelet[1328]: E0816 17:10:38.695607    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828238694884936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:48 ha-764617 kubelet[1328]: E0816 17:10:48.596185    1328 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:10:48 ha-764617 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:10:48 ha-764617 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:10:48 ha-764617 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:10:48 ha-764617 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:10:48 ha-764617 kubelet[1328]: E0816 17:10:48.697973    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828248697601185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:48 ha-764617 kubelet[1328]: E0816 17:10:48.698070    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828248697601185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:58 ha-764617 kubelet[1328]: E0816 17:10:58.699796    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828258699572978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:58 ha-764617 kubelet[1328]: E0816 17:10:58.699833    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828258699572978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-764617 -n ha-764617
helpers_test.go:261: (dbg) Run:  kubectl --context ha-764617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (59.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
E0816 17:11:12.269329   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (3.201292457s)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:11:11.500126   32070 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:11:11.500373   32070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:11.500388   32070 out.go:358] Setting ErrFile to fd 2...
	I0816 17:11:11.500394   32070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:11.500653   32070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:11:11.500842   32070 out.go:352] Setting JSON to false
	I0816 17:11:11.500872   32070 mustload.go:65] Loading cluster: ha-764617
	I0816 17:11:11.500974   32070 notify.go:220] Checking for updates...
	I0816 17:11:11.502548   32070 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:11:11.502573   32070 status.go:255] checking status of ha-764617 ...
	I0816 17:11:11.503144   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:11.503187   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:11.518138   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41293
	I0816 17:11:11.518621   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:11.519164   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:11.519183   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:11.519512   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:11.519716   32070 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:11:11.521561   32070 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:11:11.521574   32070 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:11.521835   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:11.521885   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:11.536434   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45425
	I0816 17:11:11.536847   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:11.537324   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:11.537346   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:11.537679   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:11.537884   32070 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:11:11.540546   32070 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:11.541014   32070 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:11.541042   32070 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:11.541213   32070 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:11.541556   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:11.541593   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:11.557064   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0816 17:11:11.557497   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:11.557929   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:11.557951   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:11.558226   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:11.558445   32070 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:11:11.558668   32070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:11.558691   32070 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:11:11.561358   32070 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:11.561675   32070 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:11.561710   32070 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:11.561820   32070 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:11:11.561971   32070 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:11:11.562112   32070 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:11:11.562219   32070 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:11:11.652171   32070 ssh_runner.go:195] Run: systemctl --version
	I0816 17:11:11.658525   32070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:11.673928   32070 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:11.673956   32070 api_server.go:166] Checking apiserver status ...
	I0816 17:11:11.673990   32070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:11.687503   32070 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:11:11.698603   32070 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:11.698657   32070 ssh_runner.go:195] Run: ls
	I0816 17:11:11.703151   32070 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:11.707296   32070 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:11.707315   32070 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:11:11.707325   32070 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:11.707347   32070 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:11:11.707651   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:11.707688   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:11.722944   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0816 17:11:11.723338   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:11.723804   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:11.723824   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:11.724118   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:11.724307   32070 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:11:11.725826   32070 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:11:11.725842   32070 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:11.726144   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:11.726193   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:11.742261   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0816 17:11:11.742643   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:11.743107   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:11.743129   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:11.743419   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:11.743611   32070 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:11:11.746347   32070 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:11.746718   32070 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:11.746751   32070 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:11.746897   32070 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:11.747264   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:11.747299   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:11.761627   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0816 17:11:11.762082   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:11.762500   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:11.762518   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:11.762813   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:11.762983   32070 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:11:11.763157   32070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:11.763175   32070 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:11:11.765603   32070 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:11.765974   32070 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:11.766006   32070 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:11.766146   32070 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:11:11.766293   32070 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:11:11.766449   32070 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:11:11.766599   32070 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	W0816 17:11:14.316891   32070 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:14.316970   32070 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0816 17:11:14.316986   32070 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:14.316993   32070 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:11:14.317027   32070 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:14.317034   32070 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:14.317338   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:14.317381   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:14.332005   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0816 17:11:14.332400   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:14.332896   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:14.332932   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:14.333257   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:14.333490   32070 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:14.335217   32070 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:14.335233   32070 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:14.335533   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:14.335579   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:14.349958   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0816 17:11:14.350401   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:14.350866   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:14.350885   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:14.351258   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:14.351457   32070 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:14.353838   32070 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:14.354240   32070 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:14.354264   32070 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:14.354420   32070 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:14.354712   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:14.354744   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:14.368934   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42175
	I0816 17:11:14.369363   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:14.369730   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:14.369748   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:14.369965   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:14.370075   32070 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:14.370250   32070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:14.370266   32070 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:14.373030   32070 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:14.373495   32070 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:14.373517   32070 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:14.373691   32070 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:14.373875   32070 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:14.374004   32070 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:14.374137   32070 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:14.451494   32070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:14.467806   32070 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:14.467828   32070 api_server.go:166] Checking apiserver status ...
	I0816 17:11:14.467859   32070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:14.488073   32070 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:14.498320   32070 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:14.498373   32070 ssh_runner.go:195] Run: ls
	I0816 17:11:14.502905   32070 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:14.509105   32070 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:14.509125   32070 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:14.509142   32070 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:14.509165   32070 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:14.509481   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:14.509517   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:14.524184   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
	I0816 17:11:14.524595   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:14.525133   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:14.525164   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:14.525438   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:14.525621   32070 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:14.527035   32070 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:14.527046   32070 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:14.527337   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:14.527367   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:14.543502   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0816 17:11:14.543903   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:14.544379   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:14.544403   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:14.544745   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:14.544954   32070 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:14.547773   32070 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:14.548296   32070 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:14.548332   32070 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:14.548458   32070 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:14.548858   32070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:14.548901   32070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:14.563880   32070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34651
	I0816 17:11:14.564303   32070 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:14.564782   32070 main.go:141] libmachine: Using API Version  1
	I0816 17:11:14.564807   32070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:14.565153   32070 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:14.565385   32070 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:14.565581   32070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:14.565602   32070 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:14.568240   32070 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:14.568702   32070 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:14.568726   32070 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:14.568929   32070 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:14.569104   32070 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:14.569233   32070 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:14.569420   32070 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:14.647178   32070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:14.660643   32070 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (4.871455241s)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:11:15.969119   32171 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:11:15.969226   32171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:15.969235   32171 out.go:358] Setting ErrFile to fd 2...
	I0816 17:11:15.969239   32171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:15.969433   32171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:11:15.969630   32171 out.go:352] Setting JSON to false
	I0816 17:11:15.969658   32171 mustload.go:65] Loading cluster: ha-764617
	I0816 17:11:15.969755   32171 notify.go:220] Checking for updates...
	I0816 17:11:15.970107   32171 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:11:15.970127   32171 status.go:255] checking status of ha-764617 ...
	I0816 17:11:15.970570   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:15.970645   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:15.989022   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0816 17:11:15.989484   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:15.990006   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:15.990033   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:15.990437   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:15.990689   32171 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:11:15.992162   32171 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:11:15.992177   32171 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:15.992455   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:15.992492   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:16.008116   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0816 17:11:16.008508   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:16.009120   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:16.009157   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:16.009493   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:16.009688   32171 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:11:16.012474   32171 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:16.012909   32171 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:16.012954   32171 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:16.013049   32171 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:16.013348   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:16.013390   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:16.028058   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0816 17:11:16.028426   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:16.028876   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:16.028898   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:16.029199   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:16.029352   32171 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:11:16.029619   32171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:16.029656   32171 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:11:16.032188   32171 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:16.032668   32171 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:16.032694   32171 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:16.032815   32171 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:11:16.032986   32171 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:11:16.033127   32171 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:11:16.033285   32171 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:11:16.120290   32171 ssh_runner.go:195] Run: systemctl --version
	I0816 17:11:16.126632   32171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:16.141235   32171 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:16.141269   32171 api_server.go:166] Checking apiserver status ...
	I0816 17:11:16.141308   32171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:16.154373   32171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:11:16.165147   32171 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:16.165207   32171 ssh_runner.go:195] Run: ls
	I0816 17:11:16.168876   32171 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:16.172938   32171 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:16.172954   32171 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:11:16.172962   32171 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:16.172976   32171 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:11:16.173327   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:16.173361   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:16.189142   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0816 17:11:16.189546   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:16.189994   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:16.190013   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:16.190329   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:16.190480   32171 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:11:16.192031   32171 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:11:16.192050   32171 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:16.192467   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:16.192510   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:16.207632   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0816 17:11:16.207997   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:16.208436   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:16.208460   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:16.208752   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:16.208951   32171 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:11:16.211808   32171 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:16.212212   32171 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:16.212245   32171 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:16.212347   32171 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:16.212697   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:16.212734   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:16.226924   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I0816 17:11:16.227399   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:16.227914   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:16.227937   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:16.228253   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:16.228442   32171 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:11:16.228659   32171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:16.228682   32171 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:11:16.231462   32171 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:16.231860   32171 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:16.231880   32171 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:16.232018   32171 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:11:16.232195   32171 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:11:16.232333   32171 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:11:16.232451   32171 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	W0816 17:11:17.388850   32171 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:17.388891   32171 retry.go:31] will retry after 145.642125ms: dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:20.460941   32171 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:20.461051   32171 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0816 17:11:20.461070   32171 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:20.461077   32171 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:11:20.461107   32171 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:20.461121   32171 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:20.461545   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:20.461602   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:20.476723   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
	I0816 17:11:20.477108   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:20.477649   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:20.477679   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:20.478022   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:20.478241   32171 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:20.479793   32171 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:20.479807   32171 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:20.480131   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:20.480173   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:20.496196   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I0816 17:11:20.496558   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:20.496981   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:20.497001   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:20.497314   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:20.497491   32171 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:20.500050   32171 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:20.500429   32171 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:20.500453   32171 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:20.500601   32171 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:20.500948   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:20.501011   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:20.515345   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39467
	I0816 17:11:20.515894   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:20.516415   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:20.516436   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:20.516778   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:20.517019   32171 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:20.517242   32171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:20.517263   32171 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:20.520142   32171 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:20.520579   32171 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:20.520613   32171 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:20.520754   32171 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:20.520923   32171 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:20.521074   32171 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:20.521225   32171 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:20.600051   32171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:20.617228   32171 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:20.617253   32171 api_server.go:166] Checking apiserver status ...
	I0816 17:11:20.617288   32171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:20.629788   32171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:20.639104   32171 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:20.639162   32171 ssh_runner.go:195] Run: ls
	I0816 17:11:20.643252   32171 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:20.647368   32171 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:20.647390   32171 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:20.647398   32171 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:20.647438   32171 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:20.647754   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:20.647787   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:20.662967   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I0816 17:11:20.663379   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:20.663863   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:20.663881   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:20.664192   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:20.664387   32171 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:20.665920   32171 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:20.665936   32171 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:20.666237   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:20.666284   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:20.680706   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0816 17:11:20.681206   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:20.681721   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:20.681740   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:20.682036   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:20.682191   32171 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:20.684903   32171 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:20.685383   32171 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:20.685421   32171 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:20.685532   32171 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:20.685926   32171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:20.685963   32171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:20.701033   32171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0816 17:11:20.701473   32171 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:20.702031   32171 main.go:141] libmachine: Using API Version  1
	I0816 17:11:20.702060   32171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:20.702441   32171 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:20.702662   32171 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:20.702854   32171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:20.702876   32171 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:20.705891   32171 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:20.706360   32171 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:20.706391   32171 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:20.706519   32171 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:20.706718   32171 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:20.706873   32171 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:20.707055   32171 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:20.783553   32171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:20.798555   32171 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (5.215159365s)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:11:21.761073   32272 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:11:21.761297   32272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:21.761306   32272 out.go:358] Setting ErrFile to fd 2...
	I0816 17:11:21.761310   32272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:21.761465   32272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:11:21.761665   32272 out.go:352] Setting JSON to false
	I0816 17:11:21.761693   32272 mustload.go:65] Loading cluster: ha-764617
	I0816 17:11:21.761754   32272 notify.go:220] Checking for updates...
	I0816 17:11:21.762071   32272 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:11:21.762084   32272 status.go:255] checking status of ha-764617 ...
	I0816 17:11:21.762458   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:21.762507   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:21.782400   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I0816 17:11:21.782791   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:21.783278   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:21.783299   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:21.783606   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:21.783776   32272 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:11:21.785302   32272 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:11:21.785321   32272 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:21.785590   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:21.785623   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:21.800459   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35863
	I0816 17:11:21.800992   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:21.801510   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:21.801534   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:21.801902   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:21.802102   32272 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:11:21.804843   32272 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:21.805228   32272 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:21.805253   32272 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:21.805415   32272 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:21.805738   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:21.805779   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:21.820228   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0816 17:11:21.820682   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:21.821137   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:21.821159   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:21.821456   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:21.821635   32272 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:11:21.821828   32272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:21.821848   32272 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:11:21.824113   32272 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:21.824482   32272 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:21.824508   32272 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:21.824570   32272 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:11:21.824760   32272 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:11:21.824911   32272 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:11:21.825035   32272 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:11:21.907887   32272 ssh_runner.go:195] Run: systemctl --version
	I0816 17:11:21.913797   32272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:21.927628   32272 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:21.927658   32272 api_server.go:166] Checking apiserver status ...
	I0816 17:11:21.927696   32272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:21.940191   32272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:11:21.948447   32272 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:21.948491   32272 ssh_runner.go:195] Run: ls
	I0816 17:11:21.952188   32272 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:21.957713   32272 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:21.957731   32272 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:11:21.957740   32272 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:21.957762   32272 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:11:21.958026   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:21.958060   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:21.972540   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0816 17:11:21.972971   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:21.973428   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:21.973448   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:21.973738   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:21.973894   32272 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:11:21.975563   32272 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:11:21.975576   32272 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:21.975892   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:21.975930   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:21.990181   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0816 17:11:21.990544   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:21.990962   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:21.990983   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:21.991273   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:21.991460   32272 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:11:21.994049   32272 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:21.994496   32272 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:21.994521   32272 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:21.994651   32272 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:21.994969   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:21.995018   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:22.011807   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0816 17:11:22.012189   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:22.012618   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:22.012656   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:22.012955   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:22.013112   32272 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:11:22.013266   32272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:22.013291   32272 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:11:22.015767   32272 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:22.016190   32272 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:22.016213   32272 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:22.016380   32272 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:11:22.016744   32272 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:11:22.016902   32272 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:11:22.017025   32272 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	W0816 17:11:23.532949   32272 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:23.533012   32272 retry.go:31] will retry after 231.631684ms: dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:26.604912   32272 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:26.604999   32272 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0816 17:11:26.605024   32272 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:26.605033   32272 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:11:26.605053   32272 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:26.605060   32272 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:26.605357   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:26.605398   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:26.620873   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0816 17:11:26.621305   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:26.621875   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:26.621897   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:26.622219   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:26.622432   32272 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:26.624047   32272 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:26.624061   32272 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:26.624455   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:26.624497   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:26.639016   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34837
	I0816 17:11:26.639488   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:26.639934   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:26.639960   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:26.640262   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:26.640409   32272 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:26.643224   32272 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:26.643665   32272 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:26.643686   32272 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:26.643845   32272 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:26.644139   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:26.644174   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:26.658403   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0816 17:11:26.658789   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:26.659175   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:26.659192   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:26.659466   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:26.659662   32272 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:26.659835   32272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:26.659855   32272 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:26.662359   32272 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:26.662701   32272 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:26.662734   32272 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:26.662855   32272 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:26.663007   32272 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:26.663141   32272 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:26.663262   32272 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:26.739853   32272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:26.753788   32272 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:26.753815   32272 api_server.go:166] Checking apiserver status ...
	I0816 17:11:26.753859   32272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:26.767399   32272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:26.776204   32272 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:26.776253   32272 ssh_runner.go:195] Run: ls
	I0816 17:11:26.780603   32272 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:26.784837   32272 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:26.784856   32272 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:26.784864   32272 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:26.784878   32272 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:26.785178   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:26.785213   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:26.799830   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0816 17:11:26.800248   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:26.800812   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:26.800838   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:26.801151   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:26.801362   32272 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:26.803209   32272 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:26.803222   32272 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:26.803584   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:26.803637   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:26.818462   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I0816 17:11:26.818848   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:26.819312   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:26.819329   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:26.819607   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:26.819775   32272 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:26.822404   32272 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:26.822769   32272 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:26.822803   32272 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:26.822928   32272 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:26.823243   32272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:26.823288   32272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:26.837599   32272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0816 17:11:26.837981   32272 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:26.838448   32272 main.go:141] libmachine: Using API Version  1
	I0816 17:11:26.838468   32272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:26.838731   32272 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:26.838897   32272 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:26.839084   32272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:26.839104   32272 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:26.841611   32272 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:26.842015   32272 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:26.842046   32272 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:26.842186   32272 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:26.842328   32272 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:26.842474   32272 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:26.842572   32272 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:26.919276   32272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:26.933388   32272 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (4.789209958s)

-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 17:11:28.699459   32372 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:11:28.699756   32372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:28.699767   32372 out.go:358] Setting ErrFile to fd 2...
	I0816 17:11:28.699773   32372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:28.699935   32372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:11:28.700110   32372 out.go:352] Setting JSON to false
	I0816 17:11:28.700138   32372 mustload.go:65] Loading cluster: ha-764617
	I0816 17:11:28.700183   32372 notify.go:220] Checking for updates...
	I0816 17:11:28.700576   32372 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:11:28.700596   32372 status.go:255] checking status of ha-764617 ...
	I0816 17:11:28.701153   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:28.701221   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:28.719934   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0816 17:11:28.720358   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:28.720996   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:28.721028   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:28.721346   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:28.721550   32372 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:11:28.723410   32372 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:11:28.723429   32372 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:28.723688   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:28.723720   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:28.738986   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0816 17:11:28.739474   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:28.740076   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:28.740112   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:28.740419   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:28.740591   32372 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:11:28.743211   32372 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:28.743609   32372 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:28.743632   32372 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:28.743755   32372 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:28.744149   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:28.744188   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:28.759207   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0816 17:11:28.759621   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:28.760029   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:28.760053   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:28.760411   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:28.760664   32372 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:11:28.760844   32372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:28.760862   32372 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:11:28.764069   32372 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:28.764502   32372 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:28.764526   32372 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:28.764672   32372 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:11:28.764833   32372 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:11:28.765007   32372 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:11:28.765242   32372 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:11:28.847980   32372 ssh_runner.go:195] Run: systemctl --version
	I0816 17:11:28.854396   32372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:28.868962   32372 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:28.868996   32372 api_server.go:166] Checking apiserver status ...
	I0816 17:11:28.869040   32372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:28.882311   32372 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:11:28.891366   32372 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:28.891436   32372 ssh_runner.go:195] Run: ls
	I0816 17:11:28.895859   32372 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:28.899847   32372 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:28.899866   32372 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:11:28.899875   32372 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:28.899889   32372 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:11:28.900210   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:28.900242   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:28.914929   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I0816 17:11:28.915335   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:28.915804   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:28.915823   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:28.916163   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:28.916335   32372 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:11:28.917798   32372 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:11:28.917811   32372 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:28.918125   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:28.918161   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:28.933243   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0816 17:11:28.933633   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:28.934081   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:28.934102   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:28.934431   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:28.934630   32372 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:11:28.937614   32372 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:28.938140   32372 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:28.938167   32372 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:28.938306   32372 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:28.938732   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:28.938770   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:28.953126   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I0816 17:11:28.953486   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:28.953963   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:28.953985   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:28.954232   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:28.954429   32372 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:11:28.954607   32372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:28.954627   32372 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:11:28.957400   32372 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:28.957809   32372 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:28.957837   32372 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:28.957996   32372 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:11:28.958176   32372 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:11:28.958366   32372 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:11:28.958524   32372 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	W0816 17:11:29.676835   32372 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:29.676899   32372 retry.go:31] will retry after 373.08629ms: dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:33.100886   32372 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:33.100982   32372 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0816 17:11:33.100995   32372 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:33.101006   32372 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:11:33.101023   32372 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:33.101031   32372 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:33.101335   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:33.101381   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:33.116742   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I0816 17:11:33.117161   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:33.117666   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:33.117693   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:33.118016   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:33.118189   32372 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:33.119917   32372 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:33.119935   32372 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:33.120254   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:33.120295   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:33.135053   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34201
	I0816 17:11:33.135414   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:33.135866   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:33.135886   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:33.136146   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:33.136325   32372 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:33.139023   32372 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:33.139379   32372 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:33.139401   32372 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:33.139566   32372 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:33.139864   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:33.139898   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:33.156096   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33861
	I0816 17:11:33.156495   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:33.157051   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:33.157077   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:33.157425   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:33.157627   32372 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:33.157834   32372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:33.157855   32372 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:33.160813   32372 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:33.161228   32372 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:33.161257   32372 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:33.161401   32372 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:33.161552   32372 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:33.161702   32372 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:33.161827   32372 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:33.240486   32372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:33.257237   32372 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:33.257265   32372 api_server.go:166] Checking apiserver status ...
	I0816 17:11:33.257295   32372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:33.273260   32372 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:33.284713   32372 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:33.284769   32372 ssh_runner.go:195] Run: ls
	I0816 17:11:33.288777   32372 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:33.295185   32372 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:33.295208   32372 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:33.295219   32372 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:33.295237   32372 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:33.295544   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:33.295590   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:33.310919   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0816 17:11:33.311302   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:33.311770   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:33.311788   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:33.312154   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:33.312356   32372 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:33.313841   32372 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:33.313855   32372 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:33.314122   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:33.314168   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:33.330392   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I0816 17:11:33.330782   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:33.331200   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:33.331220   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:33.331573   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:33.331740   32372 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:33.334527   32372 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:33.335046   32372 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:33.335071   32372 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:33.335256   32372 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:33.335606   32372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:33.335639   32372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:33.351320   32372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42961
	I0816 17:11:33.351685   32372 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:33.352166   32372 main.go:141] libmachine: Using API Version  1
	I0816 17:11:33.352184   32372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:33.352495   32372 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:33.352673   32372 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:33.352835   32372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:33.352852   32372 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:33.355723   32372 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:33.356102   32372 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:33.356122   32372 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:33.356262   32372 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:33.356424   32372 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:33.356573   32372 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:33.356712   32372 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:33.434861   32372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:33.447700   32372 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (4.530555709s)

-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 17:11:35.411534   32488 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:11:35.411783   32488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:35.411791   32488 out.go:358] Setting ErrFile to fd 2...
	I0816 17:11:35.411795   32488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:35.411973   32488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:11:35.412110   32488 out.go:352] Setting JSON to false
	I0816 17:11:35.412134   32488 mustload.go:65] Loading cluster: ha-764617
	I0816 17:11:35.412252   32488 notify.go:220] Checking for updates...
	I0816 17:11:35.412495   32488 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:11:35.412509   32488 status.go:255] checking status of ha-764617 ...
	I0816 17:11:35.412986   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:35.413058   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:35.432572   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I0816 17:11:35.432975   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:35.433421   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:35.433441   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:35.433827   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:35.433995   32488 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:11:35.435596   32488 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:11:35.435611   32488 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:35.435924   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:35.435962   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:35.451518   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0816 17:11:35.451919   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:35.452408   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:35.452430   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:35.452818   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:35.453017   32488 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:11:35.456007   32488 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:35.456458   32488 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:35.456488   32488 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:35.456662   32488 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:35.456974   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:35.457012   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:35.471712   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0816 17:11:35.472166   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:35.472644   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:35.472681   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:35.472988   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:35.473170   32488 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:11:35.473359   32488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:35.473378   32488 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:11:35.476029   32488 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:35.476507   32488 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:35.476531   32488 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:35.476806   32488 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:11:35.476974   32488 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:11:35.477167   32488 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:11:35.477373   32488 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:11:35.567569   32488 ssh_runner.go:195] Run: systemctl --version
	I0816 17:11:35.573341   32488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:35.587743   32488 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:35.587771   32488 api_server.go:166] Checking apiserver status ...
	I0816 17:11:35.587819   32488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:35.600788   32488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:11:35.611346   32488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:35.611396   32488 ssh_runner.go:195] Run: ls
	I0816 17:11:35.620694   32488 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:35.624870   32488 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:35.624894   32488 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:11:35.624907   32488 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:35.624927   32488 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:11:35.625226   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:35.625267   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:35.640046   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0816 17:11:35.640435   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:35.640906   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:35.640928   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:35.641242   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:35.641419   32488 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:11:35.642959   32488 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:11:35.642972   32488 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:35.643267   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:35.643297   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:35.658596   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37181
	I0816 17:11:35.659004   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:35.659462   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:35.659487   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:35.659832   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:35.660017   32488 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:11:35.662812   32488 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:35.663300   32488 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:35.663332   32488 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:35.663484   32488 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:35.663767   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:35.663818   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:35.679218   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0816 17:11:35.679658   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:35.680086   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:35.680105   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:35.680415   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:35.680591   32488 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:11:35.680772   32488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:35.680795   32488 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:11:35.683203   32488 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:35.683590   32488 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:35.683616   32488 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:35.683698   32488 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:11:35.683841   32488 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:11:35.683981   32488 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:11:35.684146   32488 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	W0816 17:11:36.172871   32488 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:36.172913   32488 retry.go:31] will retry after 322.997901ms: dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:39.564859   32488 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:39.564945   32488 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0816 17:11:39.564961   32488 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:39.564970   32488 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:11:39.564988   32488 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:39.564995   32488 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:39.565300   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:39.565351   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:39.580238   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I0816 17:11:39.580671   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:39.581116   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:39.581140   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:39.581462   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:39.581640   32488 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:39.583378   32488 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:39.583397   32488 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:39.583754   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:39.583799   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:39.598426   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I0816 17:11:39.598821   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:39.599277   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:39.599315   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:39.599624   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:39.599855   32488 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:39.603041   32488 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:39.603435   32488 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:39.603456   32488 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:39.603601   32488 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:39.604017   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:39.604063   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:39.619276   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0816 17:11:39.619666   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:39.620124   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:39.620144   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:39.620448   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:39.620649   32488 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:39.620832   32488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:39.620853   32488 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:39.623515   32488 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:39.623953   32488 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:39.623988   32488 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:39.624098   32488 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:39.624265   32488 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:39.624428   32488 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:39.624582   32488 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:39.700783   32488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:39.716206   32488 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:39.716233   32488 api_server.go:166] Checking apiserver status ...
	I0816 17:11:39.716273   32488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:39.729020   32488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:39.738039   32488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:39.738089   32488 ssh_runner.go:195] Run: ls
	I0816 17:11:39.742196   32488 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:39.748099   32488 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:39.748125   32488 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:39.748133   32488 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:39.748147   32488 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:39.748434   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:39.748465   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:39.763307   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
	I0816 17:11:39.763734   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:39.764140   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:39.764155   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:39.764553   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:39.764814   32488 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:39.766770   32488 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:39.766787   32488 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:39.767106   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:39.767148   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:39.782161   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0816 17:11:39.782617   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:39.783092   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:39.783113   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:39.783367   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:39.783560   32488 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:39.786185   32488 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:39.786601   32488 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:39.786640   32488 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:39.786734   32488 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:39.787104   32488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:39.787145   32488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:39.802560   32488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0816 17:11:39.802920   32488 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:39.803443   32488 main.go:141] libmachine: Using API Version  1
	I0816 17:11:39.803469   32488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:39.803793   32488 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:39.804015   32488 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:39.804212   32488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:39.804229   32488 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:39.806911   32488 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:39.807285   32488 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:39.807322   32488 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:39.807426   32488 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:39.807581   32488 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:39.807707   32488 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:39.807814   32488 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:39.888566   32488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:39.901970   32488 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
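The stderr above records the per-node status probe: the kvm2 driver plugin reports the host state, then minikube opens an SSH session to read the disk usage of /var, checks whether the kubelet unit is active, and probes the apiserver health endpoint through the HA virtual IP 192.168.39.254. For ha-764617-m02 the SSH dial to 192.168.39.184:22 fails with "no route to host", so that node is reported as Host:Error with Kubelet and APIServer Nonexistent. A minimal manual equivalent of the same checks (a sketch only; run on a reachable node, e.g. via out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03):

	df -h /var | awk 'NR==2{print $5}'               # storage capacity check for /var
	sudo systemctl is-active --quiet kubelet && echo kubelet running
	curl -sk https://192.168.39.254:8443/healthz     # apiserver health via the HA VIP (expects "ok")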
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (3.715788735s)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:11:45.964798   32603 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:11:45.964918   32603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:45.964928   32603 out.go:358] Setting ErrFile to fd 2...
	I0816 17:11:45.964932   32603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:45.965137   32603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:11:45.965339   32603 out.go:352] Setting JSON to false
	I0816 17:11:45.965372   32603 mustload.go:65] Loading cluster: ha-764617
	I0816 17:11:45.965405   32603 notify.go:220] Checking for updates...
	I0816 17:11:45.965816   32603 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:11:45.965833   32603 status.go:255] checking status of ha-764617 ...
	I0816 17:11:45.966262   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:45.966327   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:45.981479   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0816 17:11:45.981939   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:45.982508   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:45.982535   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:45.982822   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:45.983034   32603 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:11:45.984799   32603 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:11:45.984824   32603 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:45.985139   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:45.985191   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:46.000286   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I0816 17:11:46.000719   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:46.001146   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:46.001167   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:46.001474   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:46.001667   32603 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:11:46.004737   32603 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:46.005235   32603 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:46.005251   32603 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:46.005434   32603 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:46.005807   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:46.005852   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:46.021469   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33545
	I0816 17:11:46.021930   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:46.022399   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:46.022418   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:46.022797   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:46.023049   32603 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:11:46.023284   32603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:46.023310   32603 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:11:46.026233   32603 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:46.026723   32603 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:46.026760   32603 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:46.026876   32603 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:11:46.027062   32603 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:11:46.027197   32603 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:11:46.027331   32603 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:11:46.115736   32603 ssh_runner.go:195] Run: systemctl --version
	I0816 17:11:46.121409   32603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:46.137176   32603 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:46.137211   32603 api_server.go:166] Checking apiserver status ...
	I0816 17:11:46.137253   32603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:46.153096   32603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:11:46.165725   32603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:46.165776   32603 ssh_runner.go:195] Run: ls
	I0816 17:11:46.169914   32603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:46.173874   32603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:46.173894   32603 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:11:46.173902   32603 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:46.173916   32603 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:11:46.174215   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:46.174262   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:46.189165   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0816 17:11:46.189586   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:46.190022   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:46.190045   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:46.190353   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:46.190528   32603 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:11:46.192113   32603 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:11:46.192131   32603 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:46.192468   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:46.192502   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:46.209197   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0816 17:11:46.209614   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:46.210035   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:46.210057   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:46.210476   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:46.210688   32603 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:11:46.213603   32603 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:46.214018   32603 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:46.214044   32603 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:46.214189   32603 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:11:46.214517   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:46.214556   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:46.229810   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0816 17:11:46.230173   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:46.230602   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:46.230618   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:46.230927   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:46.231141   32603 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:11:46.231336   32603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:46.231361   32603 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:11:46.234171   32603 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:46.234692   32603 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:11:46.234724   32603 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:11:46.234878   32603 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:11:46.235064   32603 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:11:46.235268   32603 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:11:46.235452   32603 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	W0816 17:11:49.292907   32603 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.184:22: connect: no route to host
	W0816 17:11:49.293011   32603 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	E0816 17:11:49.293035   32603 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:49.293045   32603 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:11:49.293070   32603 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.184:22: connect: no route to host
	I0816 17:11:49.293081   32603 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:49.293410   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:49.293460   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:49.308133   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46777
	I0816 17:11:49.308573   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:49.309067   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:49.309086   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:49.309414   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:49.309579   32603 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:49.311321   32603 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:49.311339   32603 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:49.311654   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:49.311705   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:49.326153   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42119
	I0816 17:11:49.326582   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:49.327011   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:49.327031   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:49.327340   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:49.327532   32603 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:49.330309   32603 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:49.330783   32603 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:49.330802   32603 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:49.330963   32603 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:49.331405   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:49.331449   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:49.346372   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I0816 17:11:49.346713   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:49.347186   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:49.347218   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:49.347509   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:49.347669   32603 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:49.347848   32603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:49.347864   32603 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:49.350369   32603 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:49.350764   32603 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:49.350788   32603 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:49.350970   32603 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:49.351126   32603 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:49.351259   32603 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:49.351370   32603 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:49.427826   32603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:49.442320   32603 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:49.442347   32603 api_server.go:166] Checking apiserver status ...
	I0816 17:11:49.442379   32603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:49.457718   32603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:49.467932   32603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:49.467977   32603 ssh_runner.go:195] Run: ls
	I0816 17:11:49.471999   32603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:49.478188   32603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:49.478206   32603 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:49.478213   32603 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:49.478227   32603 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:49.478598   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:49.478645   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:49.493749   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42393
	I0816 17:11:49.494125   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:49.494654   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:49.494672   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:49.495013   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:49.495257   32603 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:49.496963   32603 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:49.496977   32603 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:49.497246   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:49.497278   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:49.513015   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0816 17:11:49.513414   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:49.513893   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:49.513920   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:49.514199   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:49.514376   32603 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:49.516893   32603 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:49.517285   32603 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:49.517317   32603 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:49.517440   32603 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:49.517740   32603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:49.517770   32603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:49.532186   32603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0816 17:11:49.532532   32603 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:49.533008   32603 main.go:141] libmachine: Using API Version  1
	I0816 17:11:49.533025   32603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:49.533330   32603 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:49.533507   32603 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:49.533709   32603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:49.533726   32603 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:49.536458   32603 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:49.536969   32603 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:49.537007   32603 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:49.537148   32603 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:49.537329   32603 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:49.537496   32603 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:49.537637   32603 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:49.619433   32603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:49.634807   32603 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
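The same stderr also logs "unable to find freezer cgroup" on every control-plane check: the egrep over /proc/<pid>/cgroup finds no "freezer:" controller line, which is consistent with a cgroup v2 guest where that file only carries the unified "0::/..." entry. The warning is harmless here; the subsequent healthz probe returns 200 and the apiserver is reported Running. To inspect this on a node (a sketch, reusing the pgrep pattern from the log):

	sudo cat /proc/$(pgrep -xnf 'kube-apiserver.*minikube.*')/cgroup   # cgroup membership of the apiserver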
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 7 (608.945657ms)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:11:53.727433   32723 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:11:53.727579   32723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:53.727590   32723 out.go:358] Setting ErrFile to fd 2...
	I0816 17:11:53.727596   32723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:11:53.727771   32723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:11:53.727954   32723 out.go:352] Setting JSON to false
	I0816 17:11:53.727984   32723 mustload.go:65] Loading cluster: ha-764617
	I0816 17:11:53.728090   32723 notify.go:220] Checking for updates...
	I0816 17:11:53.728389   32723 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:11:53.728406   32723 status.go:255] checking status of ha-764617 ...
	I0816 17:11:53.728864   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:53.728935   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:53.748431   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0816 17:11:53.748934   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:53.749577   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:53.749605   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:53.750000   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:53.750209   32723 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:11:53.751885   32723 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:11:53.751899   32723 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:53.752179   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:53.752216   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:53.767935   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0816 17:11:53.768387   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:53.768907   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:53.768933   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:53.769277   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:53.769494   32723 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:11:53.772554   32723 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:53.773051   32723 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:53.773080   32723 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:53.773216   32723 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:11:53.773575   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:53.773619   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:53.788254   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I0816 17:11:53.788671   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:53.789119   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:53.789143   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:53.789388   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:53.789607   32723 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:11:53.789799   32723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:53.789825   32723 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:11:53.792589   32723 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:53.793031   32723 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:11:53.793056   32723 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:11:53.793214   32723 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:11:53.793483   32723 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:11:53.793658   32723 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:11:53.793793   32723 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:11:53.880020   32723 ssh_runner.go:195] Run: systemctl --version
	I0816 17:11:53.886564   32723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:53.902852   32723 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:53.902893   32723 api_server.go:166] Checking apiserver status ...
	I0816 17:11:53.902947   32723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:53.916936   32723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:11:53.926350   32723 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:53.926442   32723 ssh_runner.go:195] Run: ls
	I0816 17:11:53.930938   32723 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:53.934968   32723 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:53.934995   32723 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:11:53.935007   32723 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:53.935027   32723 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:11:53.935342   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:53.935376   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:53.950339   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0816 17:11:53.950725   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:53.951187   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:53.951202   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:53.951544   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:53.951756   32723 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:11:53.953434   32723 status.go:330] ha-764617-m02 host status = "Stopped" (err=<nil>)
	I0816 17:11:53.953452   32723 status.go:343] host is not running, skipping remaining checks
	I0816 17:11:53.953461   32723 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:53.953479   32723 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:11:53.953772   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:53.953808   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:53.968435   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0816 17:11:53.968891   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:53.969355   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:53.969384   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:53.969704   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:53.969901   32723 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:11:53.971407   32723 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:11:53.971423   32723 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:53.971723   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:53.971778   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:53.986359   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0816 17:11:53.986771   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:53.987208   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:53.987223   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:53.987601   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:53.987802   32723 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:11:53.990757   32723 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:53.991188   32723 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:53.991205   32723 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:53.991367   32723 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:11:53.991671   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:53.991703   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:54.006527   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35725
	I0816 17:11:54.006955   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:54.007391   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:54.007418   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:54.007721   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:54.007914   32723 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:11:54.008094   32723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:54.008113   32723 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:11:54.011595   32723 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:54.012154   32723 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:11:54.012191   32723 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:11:54.012369   32723 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:11:54.012556   32723 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:11:54.012765   32723 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:11:54.012907   32723 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:11:54.095735   32723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:54.109942   32723 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:11:54.109966   32723 api_server.go:166] Checking apiserver status ...
	I0816 17:11:54.109994   32723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:11:54.124639   32723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:11:54.133563   32723 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:11:54.133621   32723 ssh_runner.go:195] Run: ls
	I0816 17:11:54.137625   32723 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:11:54.142166   32723 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:11:54.142192   32723 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:11:54.142201   32723 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:11:54.142215   32723 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:11:54.142533   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:54.142575   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:54.157371   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I0816 17:11:54.157823   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:54.158280   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:54.158301   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:54.158597   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:54.158809   32723 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:11:54.160283   32723 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:11:54.160300   32723 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:54.160590   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:54.160641   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:54.175721   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0816 17:11:54.176084   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:54.176571   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:54.176595   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:54.177004   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:54.177193   32723 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:11:54.179812   32723 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:54.180243   32723 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:54.180261   32723 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:54.180413   32723 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:11:54.180782   32723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:11:54.180827   32723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:11:54.196370   32723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0816 17:11:54.196840   32723 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:11:54.197414   32723 main.go:141] libmachine: Using API Version  1
	I0816 17:11:54.197442   32723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:11:54.197786   32723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:11:54.197979   32723 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:11:54.198187   32723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:11:54.198222   32723 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:11:54.200951   32723 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:54.201367   32723 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:11:54.201397   32723 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:11:54.201517   32723 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:11:54.201709   32723 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:11:54.201839   32723 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:11:54.201983   32723 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:11:54.279758   32723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:11:54.294854   32723 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
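In the run above the failure mode for ha-764617-m02 changes: the kvm2 driver now reports the host as "Stopped", so minikube skips the remaining SSH, kubelet and apiserver checks and lists every component of that node as Stopped, while ha-764617, ha-764617-m03 and ha-764617-m04 are unchanged; the command still exits non-zero (exit status 7). A quick way to confirm the VM state directly on the CI host, assuming libvirt tooling is available and the domain carries the machine name, is the sketch below:

	virsh -c qemu:///system domstate ha-764617-m02   # expected: "shut off"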
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 7 (609.799866ms)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-764617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:12:07.693800   32845 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:12:07.693909   32845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:12:07.693917   32845 out.go:358] Setting ErrFile to fd 2...
	I0816 17:12:07.693922   32845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:12:07.694136   32845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:12:07.694291   32845 out.go:352] Setting JSON to false
	I0816 17:12:07.694317   32845 mustload.go:65] Loading cluster: ha-764617
	I0816 17:12:07.694451   32845 notify.go:220] Checking for updates...
	I0816 17:12:07.694702   32845 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:12:07.694715   32845 status.go:255] checking status of ha-764617 ...
	I0816 17:12:07.695048   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:07.695098   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:07.714793   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0816 17:12:07.715568   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:07.716083   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:07.716136   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:07.716554   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:07.716774   32845 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:12:07.718525   32845 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:12:07.718543   32845 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:12:07.718866   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:07.718906   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:07.734823   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0816 17:12:07.735222   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:07.735678   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:07.735699   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:07.736021   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:07.736195   32845 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:12:07.739288   32845 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:12:07.739749   32845 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:12:07.739783   32845 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:12:07.739934   32845 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:12:07.740331   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:07.740378   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:07.754991   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0816 17:12:07.755379   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:07.755808   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:07.755833   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:07.756131   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:07.756309   32845 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:12:07.756514   32845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:12:07.756540   32845 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:12:07.759220   32845 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:12:07.759654   32845 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:12:07.759689   32845 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:12:07.759878   32845 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:12:07.760039   32845 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:12:07.760201   32845 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:12:07.760326   32845 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:12:07.848041   32845 ssh_runner.go:195] Run: systemctl --version
	I0816 17:12:07.854011   32845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:12:07.868911   32845 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:12:07.868943   32845 api_server.go:166] Checking apiserver status ...
	I0816 17:12:07.868975   32845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:12:07.882904   32845 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0816 17:12:07.894246   32845 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:12:07.894303   32845 ssh_runner.go:195] Run: ls
	I0816 17:12:07.899302   32845 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:12:07.903539   32845 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:12:07.903562   32845 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:12:07.903572   32845 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:12:07.903587   32845 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:12:07.903896   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:07.903928   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:07.919090   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38337
	I0816 17:12:07.919542   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:07.920048   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:07.920069   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:07.920380   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:07.920549   32845 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:12:07.922102   32845 status.go:330] ha-764617-m02 host status = "Stopped" (err=<nil>)
	I0816 17:12:07.922115   32845 status.go:343] host is not running, skipping remaining checks
	I0816 17:12:07.922121   32845 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:12:07.922146   32845 status.go:255] checking status of ha-764617-m03 ...
	I0816 17:12:07.922493   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:07.922531   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:07.936933   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0816 17:12:07.937367   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:07.937842   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:07.937861   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:07.938156   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:07.938338   32845 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:12:07.939924   32845 status.go:330] ha-764617-m03 host status = "Running" (err=<nil>)
	I0816 17:12:07.939937   32845 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:12:07.940311   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:07.940374   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:07.955645   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41379
	I0816 17:12:07.955991   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:07.956431   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:07.956450   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:07.956752   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:07.956928   32845 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:12:07.959551   32845 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:12:07.960004   32845 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:12:07.960029   32845 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:12:07.960188   32845 host.go:66] Checking if "ha-764617-m03" exists ...
	I0816 17:12:07.960561   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:07.960599   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:07.975364   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I0816 17:12:07.975755   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:07.976227   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:07.976250   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:07.976563   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:07.976770   32845 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:12:07.976980   32845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:12:07.977003   32845 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:12:07.980082   32845 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:12:07.980563   32845 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:12:07.980589   32845 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:12:07.980740   32845 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:12:07.980967   32845 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:12:07.981115   32845 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:12:07.981232   32845 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:12:08.059773   32845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:12:08.077109   32845 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:12:08.077136   32845 api_server.go:166] Checking apiserver status ...
	I0816 17:12:08.077175   32845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:12:08.090304   32845 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0816 17:12:08.100687   32845 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:12:08.100751   32845 ssh_runner.go:195] Run: ls
	I0816 17:12:08.105068   32845 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:12:08.109232   32845 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:12:08.109259   32845 status.go:422] ha-764617-m03 apiserver status = Running (err=<nil>)
	I0816 17:12:08.109269   32845 status.go:257] ha-764617-m03 status: &{Name:ha-764617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:12:08.109288   32845 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:12:08.109657   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:08.109693   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:08.125500   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0816 17:12:08.125914   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:08.126370   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:08.126390   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:08.126741   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:08.126943   32845 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:12:08.128545   32845 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:12:08.128561   32845 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:12:08.128895   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:08.128932   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:08.143540   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I0816 17:12:08.144035   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:08.144500   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:08.144525   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:08.144886   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:08.145080   32845 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:12:08.147577   32845 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:12:08.148043   32845 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:12:08.148063   32845 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:12:08.148226   32845 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:12:08.148507   32845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:08.148539   32845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:08.163970   32845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0816 17:12:08.164412   32845 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:08.164943   32845 main.go:141] libmachine: Using API Version  1
	I0816 17:12:08.164966   32845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:08.165285   32845 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:08.165486   32845 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:12:08.165696   32845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:12:08.165715   32845 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:12:08.168773   32845 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:12:08.169248   32845 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:12:08.169273   32845 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:12:08.169477   32845 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:12:08.169656   32845 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:12:08.169830   32845 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:12:08.169993   32845 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:12:08.247604   32845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:12:08.261424   32845 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr" : exit status 7
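Note: in the stderr log above, the status probe falls back from the freezer-cgroup lookup (which fails on this guest) to a plain GET against the apiserver /healthz endpoint on the load-balancer VIP, and that GET succeeds. A minimal manual re-check of the same endpoint, sketched here under the assumption that the VIP and port are unchanged:

# Sketch only: repeat the apiserver health probe from the log by hand.
# -k skips TLS verification because the cluster CA is not in the host trust store.
curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.39.254:8443/healthz

A 200 here matches the "returned 200: ok" lines above, so the exit status 7 appears to reflect the stopped ha-764617-m02 host rather than an apiserver fault.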
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-764617 -n ha-764617
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-764617 logs -n 25: (1.313571165s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617:/home/docker/cp-test_ha-764617-m03_ha-764617.txt                       |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617 sudo cat                                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617.txt                                 |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m02:/home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m04 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp testdata/cp-test.txt                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617:/home/docker/cp-test_ha-764617-m04_ha-764617.txt                       |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617 sudo cat                                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617.txt                                 |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m02:/home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03:/home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m03 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-764617 node stop m02 -v=7                                                     | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-764617 node start m02 -v=7                                                    | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:04:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:04:11.174420   27287 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:04:11.174645   27287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:04:11.174653   27287 out.go:358] Setting ErrFile to fd 2...
	I0816 17:04:11.174657   27287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:04:11.174805   27287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:04:11.175400   27287 out.go:352] Setting JSON to false
	I0816 17:04:11.176184   27287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2749,"bootTime":1723825102,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:04:11.176238   27287 start.go:139] virtualization: kvm guest
	I0816 17:04:11.178345   27287 out.go:177] * [ha-764617] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:04:11.179681   27287 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:04:11.179713   27287 notify.go:220] Checking for updates...
	I0816 17:04:11.181900   27287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:04:11.183037   27287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:04:11.184170   27287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:11.185338   27287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:04:11.186327   27287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:04:11.187420   27287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:04:11.221543   27287 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 17:04:11.222682   27287 start.go:297] selected driver: kvm2
	I0816 17:04:11.222697   27287 start.go:901] validating driver "kvm2" against <nil>
	I0816 17:04:11.222710   27287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:04:11.223397   27287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:04:11.223476   27287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 17:04:11.238691   27287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 17:04:11.238751   27287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:04:11.238965   27287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:04:11.239001   27287 cni.go:84] Creating CNI manager for ""
	I0816 17:04:11.239010   27287 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 17:04:11.239021   27287 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:04:11.239092   27287 start.go:340] cluster config:
	{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:04:11.239194   27287 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:04:11.240824   27287 out.go:177] * Starting "ha-764617" primary control-plane node in "ha-764617" cluster
	I0816 17:04:11.241860   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:04:11.241899   27287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 17:04:11.241907   27287 cache.go:56] Caching tarball of preloaded images
	I0816 17:04:11.241987   27287 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:04:11.242000   27287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:04:11.242295   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:11.242324   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json: {Name:mke1f2c51e39699076007c2f0252e975b8439c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:11.242473   27287 start.go:360] acquireMachinesLock for ha-764617: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:04:11.242514   27287 start.go:364] duration metric: took 25.966µs to acquireMachinesLock for "ha-764617"
	I0816 17:04:11.242535   27287 start.go:93] Provisioning new machine with config: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:04:11.242604   27287 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 17:04:11.244182   27287 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 17:04:11.244317   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:11.244348   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:11.258103   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0816 17:04:11.258510   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:11.259028   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:11.259044   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:11.259383   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:11.259556   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:11.259684   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:11.259825   27287 start.go:159] libmachine.API.Create for "ha-764617" (driver="kvm2")
	I0816 17:04:11.259862   27287 client.go:168] LocalClient.Create starting
	I0816 17:04:11.259890   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 17:04:11.259930   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:11.259947   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:11.260006   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 17:04:11.260024   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:11.260035   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:11.260051   27287 main.go:141] libmachine: Running pre-create checks...
	I0816 17:04:11.260060   27287 main.go:141] libmachine: (ha-764617) Calling .PreCreateCheck
	I0816 17:04:11.260370   27287 main.go:141] libmachine: (ha-764617) Calling .GetConfigRaw
	I0816 17:04:11.260758   27287 main.go:141] libmachine: Creating machine...
	I0816 17:04:11.260779   27287 main.go:141] libmachine: (ha-764617) Calling .Create
	I0816 17:04:11.260893   27287 main.go:141] libmachine: (ha-764617) Creating KVM machine...
	I0816 17:04:11.262073   27287 main.go:141] libmachine: (ha-764617) DBG | found existing default KVM network
	I0816 17:04:11.262688   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.262559   27310 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0816 17:04:11.262705   27287 main.go:141] libmachine: (ha-764617) DBG | created network xml: 
	I0816 17:04:11.262718   27287 main.go:141] libmachine: (ha-764617) DBG | <network>
	I0816 17:04:11.262730   27287 main.go:141] libmachine: (ha-764617) DBG |   <name>mk-ha-764617</name>
	I0816 17:04:11.262737   27287 main.go:141] libmachine: (ha-764617) DBG |   <dns enable='no'/>
	I0816 17:04:11.262751   27287 main.go:141] libmachine: (ha-764617) DBG |   
	I0816 17:04:11.262765   27287 main.go:141] libmachine: (ha-764617) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 17:04:11.262779   27287 main.go:141] libmachine: (ha-764617) DBG |     <dhcp>
	I0816 17:04:11.262793   27287 main.go:141] libmachine: (ha-764617) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 17:04:11.262804   27287 main.go:141] libmachine: (ha-764617) DBG |     </dhcp>
	I0816 17:04:11.262815   27287 main.go:141] libmachine: (ha-764617) DBG |   </ip>
	I0816 17:04:11.262824   27287 main.go:141] libmachine: (ha-764617) DBG |   
	I0816 17:04:11.262834   27287 main.go:141] libmachine: (ha-764617) DBG | </network>
	I0816 17:04:11.262850   27287 main.go:141] libmachine: (ha-764617) DBG | 
	I0816 17:04:11.267653   27287 main.go:141] libmachine: (ha-764617) DBG | trying to create private KVM network mk-ha-764617 192.168.39.0/24...
	I0816 17:04:11.328268   27287 main.go:141] libmachine: (ha-764617) DBG | private KVM network mk-ha-764617 192.168.39.0/24 created
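	The XML printed above is the private network definition the kvm2 driver hands to libvirt. For debugging, an equivalent network can be defined by hand from the same XML; this is only a sketch of a manual equivalent, not what the driver itself runs:

	# Sketch: manually define the same private network as the XML above.
	cat > /tmp/mk-ha-764617.xml <<'EOF'
	<network>
	  <name>mk-ha-764617</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	EOF
	virsh net-define /tmp/mk-ha-764617.xml
	virsh net-start mk-ha-764617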
	I0816 17:04:11.328320   27287 main.go:141] libmachine: (ha-764617) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617 ...
	I0816 17:04:11.328335   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.328212   27310 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:11.328350   27287 main.go:141] libmachine: (ha-764617) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 17:04:11.328365   27287 main.go:141] libmachine: (ha-764617) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 17:04:11.565921   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.565786   27310 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa...
	I0816 17:04:11.665197   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.665075   27310 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/ha-764617.rawdisk...
	I0816 17:04:11.665230   27287 main.go:141] libmachine: (ha-764617) DBG | Writing magic tar header
	I0816 17:04:11.665244   27287 main.go:141] libmachine: (ha-764617) DBG | Writing SSH key tar header
	I0816 17:04:11.665253   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:11.665210   27310 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617 ...
	I0816 17:04:11.665346   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617 (perms=drwx------)
	I0816 17:04:11.665364   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617
	I0816 17:04:11.665375   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 17:04:11.665391   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 17:04:11.665401   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 17:04:11.665413   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 17:04:11.665419   27287 main.go:141] libmachine: (ha-764617) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 17:04:11.665425   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 17:04:11.665434   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:11.665440   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 17:04:11.665474   27287 main.go:141] libmachine: (ha-764617) Creating domain...
	I0816 17:04:11.665498   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 17:04:11.665512   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home/jenkins
	I0816 17:04:11.665536   27287 main.go:141] libmachine: (ha-764617) DBG | Checking permissions on dir: /home
	I0816 17:04:11.665551   27287 main.go:141] libmachine: (ha-764617) DBG | Skipping /home - not owner
	I0816 17:04:11.666431   27287 main.go:141] libmachine: (ha-764617) define libvirt domain using xml: 
	I0816 17:04:11.666445   27287 main.go:141] libmachine: (ha-764617) <domain type='kvm'>
	I0816 17:04:11.666452   27287 main.go:141] libmachine: (ha-764617)   <name>ha-764617</name>
	I0816 17:04:11.666456   27287 main.go:141] libmachine: (ha-764617)   <memory unit='MiB'>2200</memory>
	I0816 17:04:11.666462   27287 main.go:141] libmachine: (ha-764617)   <vcpu>2</vcpu>
	I0816 17:04:11.666466   27287 main.go:141] libmachine: (ha-764617)   <features>
	I0816 17:04:11.666471   27287 main.go:141] libmachine: (ha-764617)     <acpi/>
	I0816 17:04:11.666475   27287 main.go:141] libmachine: (ha-764617)     <apic/>
	I0816 17:04:11.666480   27287 main.go:141] libmachine: (ha-764617)     <pae/>
	I0816 17:04:11.666485   27287 main.go:141] libmachine: (ha-764617)     
	I0816 17:04:11.666490   27287 main.go:141] libmachine: (ha-764617)   </features>
	I0816 17:04:11.666497   27287 main.go:141] libmachine: (ha-764617)   <cpu mode='host-passthrough'>
	I0816 17:04:11.666502   27287 main.go:141] libmachine: (ha-764617)   
	I0816 17:04:11.666507   27287 main.go:141] libmachine: (ha-764617)   </cpu>
	I0816 17:04:11.666512   27287 main.go:141] libmachine: (ha-764617)   <os>
	I0816 17:04:11.666519   27287 main.go:141] libmachine: (ha-764617)     <type>hvm</type>
	I0816 17:04:11.666524   27287 main.go:141] libmachine: (ha-764617)     <boot dev='cdrom'/>
	I0816 17:04:11.666537   27287 main.go:141] libmachine: (ha-764617)     <boot dev='hd'/>
	I0816 17:04:11.666557   27287 main.go:141] libmachine: (ha-764617)     <bootmenu enable='no'/>
	I0816 17:04:11.666578   27287 main.go:141] libmachine: (ha-764617)   </os>
	I0816 17:04:11.666585   27287 main.go:141] libmachine: (ha-764617)   <devices>
	I0816 17:04:11.666596   27287 main.go:141] libmachine: (ha-764617)     <disk type='file' device='cdrom'>
	I0816 17:04:11.666637   27287 main.go:141] libmachine: (ha-764617)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/boot2docker.iso'/>
	I0816 17:04:11.666660   27287 main.go:141] libmachine: (ha-764617)       <target dev='hdc' bus='scsi'/>
	I0816 17:04:11.666671   27287 main.go:141] libmachine: (ha-764617)       <readonly/>
	I0816 17:04:11.666684   27287 main.go:141] libmachine: (ha-764617)     </disk>
	I0816 17:04:11.666705   27287 main.go:141] libmachine: (ha-764617)     <disk type='file' device='disk'>
	I0816 17:04:11.666718   27287 main.go:141] libmachine: (ha-764617)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 17:04:11.666731   27287 main.go:141] libmachine: (ha-764617)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/ha-764617.rawdisk'/>
	I0816 17:04:11.666739   27287 main.go:141] libmachine: (ha-764617)       <target dev='hda' bus='virtio'/>
	I0816 17:04:11.666745   27287 main.go:141] libmachine: (ha-764617)     </disk>
	I0816 17:04:11.666752   27287 main.go:141] libmachine: (ha-764617)     <interface type='network'>
	I0816 17:04:11.666759   27287 main.go:141] libmachine: (ha-764617)       <source network='mk-ha-764617'/>
	I0816 17:04:11.666770   27287 main.go:141] libmachine: (ha-764617)       <model type='virtio'/>
	I0816 17:04:11.666782   27287 main.go:141] libmachine: (ha-764617)     </interface>
	I0816 17:04:11.666793   27287 main.go:141] libmachine: (ha-764617)     <interface type='network'>
	I0816 17:04:11.666804   27287 main.go:141] libmachine: (ha-764617)       <source network='default'/>
	I0816 17:04:11.666814   27287 main.go:141] libmachine: (ha-764617)       <model type='virtio'/>
	I0816 17:04:11.666821   27287 main.go:141] libmachine: (ha-764617)     </interface>
	I0816 17:04:11.666831   27287 main.go:141] libmachine: (ha-764617)     <serial type='pty'>
	I0816 17:04:11.666839   27287 main.go:141] libmachine: (ha-764617)       <target port='0'/>
	I0816 17:04:11.666845   27287 main.go:141] libmachine: (ha-764617)     </serial>
	I0816 17:04:11.666855   27287 main.go:141] libmachine: (ha-764617)     <console type='pty'>
	I0816 17:04:11.666867   27287 main.go:141] libmachine: (ha-764617)       <target type='serial' port='0'/>
	I0816 17:04:11.666877   27287 main.go:141] libmachine: (ha-764617)     </console>
	I0816 17:04:11.666890   27287 main.go:141] libmachine: (ha-764617)     <rng model='virtio'>
	I0816 17:04:11.666901   27287 main.go:141] libmachine: (ha-764617)       <backend model='random'>/dev/random</backend>
	I0816 17:04:11.666910   27287 main.go:141] libmachine: (ha-764617)     </rng>
	I0816 17:04:11.666921   27287 main.go:141] libmachine: (ha-764617)     
	I0816 17:04:11.666945   27287 main.go:141] libmachine: (ha-764617)     
	I0816 17:04:11.666965   27287 main.go:141] libmachine: (ha-764617)   </devices>
	I0816 17:04:11.666974   27287 main.go:141] libmachine: (ha-764617) </domain>
	I0816 17:04:11.666979   27287 main.go:141] libmachine: (ha-764617) 
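	The domain XML above defines the primary control-plane VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a SCSI cdrom, the raw disk as a virtio disk, and two virtio NICs (one on mk-ha-764617, one on the default network). A few virsh commands, sketched here, are enough to inspect what libvirt created from it:

	# Sketch: inspect the domain libvirt created from the XML above.
	virsh dumpxml ha-764617      # stored definition
	virsh domstate ha-764617     # running / shut off
	virsh domifaddr ha-764617    # DHCP leases on its interfaces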
	I0816 17:04:11.672366   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:cf:a5:f4 in network default
	I0816 17:04:11.672928   27287 main.go:141] libmachine: (ha-764617) Ensuring networks are active...
	I0816 17:04:11.672941   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:11.673616   27287 main.go:141] libmachine: (ha-764617) Ensuring network default is active
	I0816 17:04:11.674000   27287 main.go:141] libmachine: (ha-764617) Ensuring network mk-ha-764617 is active
	I0816 17:04:11.674421   27287 main.go:141] libmachine: (ha-764617) Getting domain xml...
	I0816 17:04:11.675137   27287 main.go:141] libmachine: (ha-764617) Creating domain...
	I0816 17:04:12.863675   27287 main.go:141] libmachine: (ha-764617) Waiting to get IP...
	I0816 17:04:12.864442   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:12.864835   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:12.864877   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:12.864827   27310 retry.go:31] will retry after 238.805759ms: waiting for machine to come up
	I0816 17:04:13.105386   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:13.105864   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:13.105891   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:13.105825   27310 retry.go:31] will retry after 313.687436ms: waiting for machine to come up
	I0816 17:04:13.421431   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:13.421952   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:13.421974   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:13.421918   27310 retry.go:31] will retry after 369.042428ms: waiting for machine to come up
	I0816 17:04:13.792398   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:13.792886   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:13.792927   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:13.792852   27310 retry.go:31] will retry after 568.225467ms: waiting for machine to come up
	I0816 17:04:14.362432   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:14.362828   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:14.362860   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:14.362777   27310 retry.go:31] will retry after 741.209975ms: waiting for machine to come up
	I0816 17:04:15.105604   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:15.106046   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:15.106073   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:15.105994   27310 retry.go:31] will retry after 660.568903ms: waiting for machine to come up
	I0816 17:04:15.767780   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:15.768211   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:15.768239   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:15.768164   27310 retry.go:31] will retry after 894.998278ms: waiting for machine to come up
	I0816 17:04:16.664726   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:16.665143   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:16.665170   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:16.665101   27310 retry.go:31] will retry after 1.452752003s: waiting for machine to come up
	I0816 17:04:18.119859   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:18.120258   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:18.120286   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:18.120204   27310 retry.go:31] will retry after 1.178795077s: waiting for machine to come up
	I0816 17:04:19.300517   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:19.300993   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:19.301021   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:19.300948   27310 retry.go:31] will retry after 2.323538467s: waiting for machine to come up
	I0816 17:04:21.626714   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:21.627179   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:21.627207   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:21.627095   27310 retry.go:31] will retry after 2.426890051s: waiting for machine to come up
	I0816 17:04:24.056745   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:24.057302   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:24.057325   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:24.057137   27310 retry.go:31] will retry after 2.310439067s: waiting for machine to come up
	I0816 17:04:26.369421   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:26.369803   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find current IP address of domain ha-764617 in network mk-ha-764617
	I0816 17:04:26.369828   27287 main.go:141] libmachine: (ha-764617) DBG | I0816 17:04:26.369751   27310 retry.go:31] will retry after 4.128642923s: waiting for machine to come up
	I0816 17:04:30.503022   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.503484   27287 main.go:141] libmachine: (ha-764617) Found IP for machine: 192.168.39.18
	I0816 17:04:30.503515   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has current primary IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.503525   27287 main.go:141] libmachine: (ha-764617) Reserving static IP address...
	I0816 17:04:30.504069   27287 main.go:141] libmachine: (ha-764617) DBG | unable to find host DHCP lease matching {name: "ha-764617", mac: "52:54:00:5b:ba:f5", ip: "192.168.39.18"} in network mk-ha-764617
	I0816 17:04:30.575307   27287 main.go:141] libmachine: (ha-764617) DBG | Getting to WaitForSSH function...
	I0816 17:04:30.575338   27287 main.go:141] libmachine: (ha-764617) Reserved static IP address: 192.168.39.18
	I0816 17:04:30.575351   27287 main.go:141] libmachine: (ha-764617) Waiting for SSH to be available...
	I0816 17:04:30.579341   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.579893   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.579927   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.580078   27287 main.go:141] libmachine: (ha-764617) DBG | Using SSH client type: external
	I0816 17:04:30.580094   27287 main.go:141] libmachine: (ha-764617) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa (-rw-------)
	I0816 17:04:30.580129   27287 main.go:141] libmachine: (ha-764617) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 17:04:30.580202   27287 main.go:141] libmachine: (ha-764617) DBG | About to run SSH command:
	I0816 17:04:30.580223   27287 main.go:141] libmachine: (ha-764617) DBG | exit 0
	I0816 17:04:30.712474   27287 main.go:141] libmachine: (ha-764617) DBG | SSH cmd err, output: <nil>: 
	I0816 17:04:30.712829   27287 main.go:141] libmachine: (ha-764617) KVM machine creation complete!
	I0816 17:04:30.713295   27287 main.go:141] libmachine: (ha-764617) Calling .GetConfigRaw
	I0816 17:04:30.713814   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:30.713996   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:30.714230   27287 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 17:04:30.714263   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:30.715663   27287 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 17:04:30.715674   27287 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 17:04:30.715679   27287 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 17:04:30.715685   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:30.718094   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.718477   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.718504   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.718666   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:30.718828   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.718973   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.719081   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:30.719232   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:30.719569   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:30.719582   27287 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 17:04:30.831711   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:04:30.831738   27287 main.go:141] libmachine: Detecting the provisioner...
	I0816 17:04:30.831749   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:30.834505   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.834918   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.834939   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.835178   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:30.835493   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.835670   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.835833   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:30.835995   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:30.836186   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:30.836203   27287 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 17:04:30.949182   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 17:04:30.949252   27287 main.go:141] libmachine: found compatible host: buildroot
	I0816 17:04:30.949261   27287 main.go:141] libmachine: Provisioning with buildroot...
	I0816 17:04:30.949268   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:30.949518   27287 buildroot.go:166] provisioning hostname "ha-764617"
	I0816 17:04:30.949539   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:30.949765   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:30.952461   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.952994   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:30.953019   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:30.953235   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:30.953404   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.953580   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:30.953729   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:30.953878   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:30.954089   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:30.954108   27287 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617 && echo "ha-764617" | sudo tee /etc/hostname
	I0816 17:04:31.083399   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617
	
	I0816 17:04:31.083421   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.086023   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.086356   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.086391   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.086566   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.086748   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.086912   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.087031   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.087185   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:31.087385   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:31.087402   27287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:04:31.209097   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:04:31.209120   27287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:04:31.209150   27287 buildroot.go:174] setting up certificates
	I0816 17:04:31.209159   27287 provision.go:84] configureAuth start
	I0816 17:04:31.209168   27287 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:04:31.209471   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:31.211993   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.212316   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.212340   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.212446   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.214616   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.215003   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.215030   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.215167   27287 provision.go:143] copyHostCerts
	I0816 17:04:31.215199   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:04:31.215228   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:04:31.215242   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:04:31.215307   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:04:31.215390   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:04:31.215407   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:04:31.215413   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:04:31.215442   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:04:31.215485   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:04:31.215502   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:04:31.215508   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:04:31.215529   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:04:31.215583   27287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617 san=[127.0.0.1 192.168.39.18 ha-764617 localhost minikube]
	I0816 17:04:31.373435   27287 provision.go:177] copyRemoteCerts
	I0816 17:04:31.373494   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:04:31.373517   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.376138   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.376421   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.376449   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.376660   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.376859   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.377015   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.377125   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:31.462167   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:04:31.462266   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:04:31.484481   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:04:31.484559   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 17:04:31.505907   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:04:31.505970   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 17:04:31.527200   27287 provision.go:87] duration metric: took 318.030237ms to configureAuth
	I0816 17:04:31.527226   27287 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:04:31.527416   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:04:31.527489   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.530425   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.530833   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.530857   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.531021   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.531191   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.531425   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.531586   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.531741   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:31.531914   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:31.531930   27287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:04:31.798292   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:04:31.798317   27287 main.go:141] libmachine: Checking connection to Docker...
	I0816 17:04:31.798326   27287 main.go:141] libmachine: (ha-764617) Calling .GetURL
	I0816 17:04:31.799912   27287 main.go:141] libmachine: (ha-764617) DBG | Using libvirt version 6000000
	I0816 17:04:31.802124   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.802428   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.802452   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.802622   27287 main.go:141] libmachine: Docker is up and running!
	I0816 17:04:31.802636   27287 main.go:141] libmachine: Reticulating splines...
	I0816 17:04:31.802645   27287 client.go:171] duration metric: took 20.542772627s to LocalClient.Create
	I0816 17:04:31.802671   27287 start.go:167] duration metric: took 20.542846204s to libmachine.API.Create "ha-764617"
	I0816 17:04:31.802681   27287 start.go:293] postStartSetup for "ha-764617" (driver="kvm2")
	I0816 17:04:31.802693   27287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:04:31.802714   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:31.802966   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:04:31.802989   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.805134   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.805491   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.805520   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.805631   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.805843   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.806002   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.806130   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:31.890154   27287 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:04:31.893837   27287 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:04:31.893857   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:04:31.893923   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:04:31.893990   27287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:04:31.893999   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:04:31.894079   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:04:31.902565   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:04:31.924279   27287 start.go:296] duration metric: took 121.587288ms for postStartSetup
	I0816 17:04:31.924337   27287 main.go:141] libmachine: (ha-764617) Calling .GetConfigRaw
	I0816 17:04:31.924935   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:31.927607   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.927910   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.927934   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.928141   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:31.928305   27287 start.go:128] duration metric: took 20.685691268s to createHost
	I0816 17:04:31.928324   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:31.930644   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.931018   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:31.931051   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:31.931135   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:31.931290   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.931454   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:31.931598   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:31.931777   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:04:31.931983   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:04:31.931993   27287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:04:32.044753   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827872.020223694
	
	I0816 17:04:32.044780   27287 fix.go:216] guest clock: 1723827872.020223694
	I0816 17:04:32.044789   27287 fix.go:229] Guest: 2024-08-16 17:04:32.020223694 +0000 UTC Remote: 2024-08-16 17:04:31.928315094 +0000 UTC m=+20.785775909 (delta=91.9086ms)
	I0816 17:04:32.044835   27287 fix.go:200] guest clock delta is within tolerance: 91.9086ms
	I0816 17:04:32.044843   27287 start.go:83] releasing machines lock for "ha-764617", held for 20.80232118s
	I0816 17:04:32.044876   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.045143   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:32.047638   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.047969   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:32.047995   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.048104   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.048560   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.048743   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:32.048837   27287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:04:32.048891   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:32.048954   27287 ssh_runner.go:195] Run: cat /version.json
	I0816 17:04:32.048976   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:32.051572   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.051819   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.051849   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:32.051871   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.052025   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:32.052186   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:32.052230   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:32.052258   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:32.052334   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:32.052403   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:32.052472   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:32.052557   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:32.052666   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:32.052755   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:32.133532   27287 ssh_runner.go:195] Run: systemctl --version
	I0816 17:04:32.166489   27287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:04:32.321880   27287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:04:32.327144   27287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:04:32.327210   27287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:04:32.342225   27287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 17:04:32.342252   27287 start.go:495] detecting cgroup driver to use...
	I0816 17:04:32.342315   27287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:04:32.359528   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:04:32.372483   27287 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:04:32.372545   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:04:32.385946   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:04:32.398731   27287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:04:32.510965   27287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:04:32.669176   27287 docker.go:233] disabling docker service ...
	I0816 17:04:32.669247   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:04:32.682954   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:04:32.694779   27287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:04:32.824420   27287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:04:32.938035   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:04:32.951141   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:04:32.968389   27287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:04:32.968457   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:32.978033   27287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:04:32.978103   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:32.987902   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:32.997597   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.007383   27287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:04:33.017246   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.026596   27287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.042318   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:04:33.051714   27287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:04:33.060974   27287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 17:04:33.061018   27287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 17:04:33.073318   27287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:04:33.082041   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:04:33.188184   27287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:04:33.325270   27287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:04:33.325343   27287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:04:33.330234   27287 start.go:563] Will wait 60s for crictl version
	I0816 17:04:33.330290   27287 ssh_runner.go:195] Run: which crictl
	I0816 17:04:33.333608   27287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:04:33.370836   27287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:04:33.370940   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:04:33.397234   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:04:33.423894   27287 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:04:33.424869   27287 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:04:33.427349   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:33.427640   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:33.427672   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:33.427821   27287 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:04:33.431601   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:04:33.444115   27287 kubeadm.go:883] updating cluster {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:04:33.444354   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:04:33.444479   27287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:04:33.475671   27287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 17:04:33.475753   27287 ssh_runner.go:195] Run: which lz4
	I0816 17:04:33.479653   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0816 17:04:33.479732   27287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 17:04:33.483534   27287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 17:04:33.483560   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 17:04:34.625529   27287 crio.go:462] duration metric: took 1.14581672s to copy over tarball
	I0816 17:04:34.625604   27287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 17:04:36.603204   27287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.977567495s)
	I0816 17:04:36.603231   27287 crio.go:469] duration metric: took 1.977674917s to extract the tarball
	I0816 17:04:36.603238   27287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 17:04:36.639542   27287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:04:36.685580   27287 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:04:36.685600   27287 cache_images.go:84] Images are preloaded, skipping loading
	I0816 17:04:36.685607   27287 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.31.0 crio true true} ...
	I0816 17:04:36.685701   27287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:04:36.685778   27287 ssh_runner.go:195] Run: crio config
	I0816 17:04:36.729932   27287 cni.go:84] Creating CNI manager for ""
	I0816 17:04:36.729949   27287 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 17:04:36.729958   27287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:04:36.729979   27287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-764617 NodeName:ha-764617 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:04:36.730114   27287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-764617"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 17:04:36.730136   27287 kube-vip.go:115] generating kube-vip config ...
	I0816 17:04:36.730175   27287 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:04:36.745310   27287 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:04:36.745443   27287 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0816 17:04:36.745505   27287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:04:36.754575   27287 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:04:36.754650   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 17:04:36.763403   27287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0816 17:04:36.779161   27287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:04:36.794117   27287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0816 17:04:36.809831   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0816 17:04:36.825108   27287 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:04:36.828513   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:04:36.840109   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:04:36.945366   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:04:36.960672   27287 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.18
	I0816 17:04:36.960693   27287 certs.go:194] generating shared ca certs ...
	I0816 17:04:36.960711   27287 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:36.960862   27287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:04:36.960920   27287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:04:36.960933   27287 certs.go:256] generating profile certs ...
	I0816 17:04:36.960997   27287 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:04:36.961014   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt with IP's: []
	I0816 17:04:37.176726   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt ...
	I0816 17:04:37.176760   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt: {Name:mk29d5c77bd5773d8bf6de36574a6e04d0236cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.176962   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key ...
	I0816 17:04:37.176979   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key: {Name:mk6489e419fcaef7b92be41faf0bb734efb07372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.177094   27287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881
	I0816 17:04:37.177117   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.254]
	I0816 17:04:37.290736   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881 ...
	I0816 17:04:37.290770   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881: {Name:mkb16c0a15ab305065c0248cc0b7d908e1c729bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.290951   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881 ...
	I0816 17:04:37.290968   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881: {Name:mk149993b661876c649e1091e4e9fb3fe6eb5c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.291061   27287 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.48dc9881 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:04:37.291169   27287 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.48dc9881 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:04:37.291252   27287 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:04:37.291273   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt with IP's: []
	I0816 17:04:37.458550   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt ...
	I0816 17:04:37.458580   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt: {Name:mk27f9575b8fc72d6b583bd1d3945d7bdb054f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:37.458749   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key ...
	I0816 17:04:37.458764   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key: {Name:mk84ca222215bfb6433b5f26a0008fbd0ef2ecde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
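
The block above generates the per-profile certificates: a client certificate, an apiserver serving certificate signed for the listed IP SANs (the Kubernetes service IP 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 192.168.39.18 and the HA VIP 192.168.39.254), and a proxy-client (aggregator) certificate. Below is a minimal Go sketch of producing a certificate with that SAN list; it is self-signed for brevity, whereas minikube signs these with the cluster CA, and none of the identifiers are taken from minikube's code.

// Sketch: generate a key pair and a server certificate carrying the IP SANs
// shown in the log above. Self-signed; minikube signs with its cluster CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the apiserver cert line in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.18"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
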
	I0816 17:04:37.458854   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:04:37.458876   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:04:37.458891   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:04:37.458908   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:04:37.458927   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:04:37.458944   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:04:37.458960   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:04:37.458993   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:04:37.459070   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:04:37.459118   27287 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:04:37.459134   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:04:37.459168   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:04:37.459204   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:04:37.459236   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:04:37.459291   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:04:37.459331   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.459353   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.459378   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.459910   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:04:37.482899   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:04:37.504048   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:04:37.525208   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:04:37.546815   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 17:04:37.568681   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 17:04:37.590215   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:04:37.611561   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:04:37.633430   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:04:37.654743   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:04:37.676033   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:04:37.699254   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:04:37.736664   27287 ssh_runner.go:195] Run: openssl version
	I0816 17:04:37.745980   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:04:37.756383   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.760494   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.760538   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:04:37.765914   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:04:37.775960   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:04:37.785868   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.789900   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.789951   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:04:37.795013   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:04:37.804845   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:04:37.814710   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.818676   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.818726   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:04:37.823779   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
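
The three command groups above install the CA certificates into the guest trust store: each PEM is linked into /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink (for example b5213941.0) is created under /etc/ssl/certs so TLS clients can resolve the issuer. A hedged Go sketch of that hash-and-symlink step follows; the helper name installCACert and the reliance on the openssl binary are illustrative assumptions, not minikube's implementation.

// installCACert mirrors the log's "openssl x509 -hash" + "ln -fs" steps:
// compute the subject hash of a PEM certificate and symlink
// /etc/ssl/certs/<hash>.0 back to it. Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Recreate the symlink unconditionally, like "ln -fs" in the log.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
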
	I0816 17:04:37.833390   27287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:04:37.837091   27287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:04:37.837148   27287 kubeadm.go:392] StartCluster: {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:04:37.837218   27287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:04:37.837271   27287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:04:37.872744   27287 cri.go:89] found id: ""
	I0816 17:04:37.872806   27287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 17:04:37.882348   27287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 17:04:37.891808   27287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 17:04:37.901344   27287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 17:04:37.901360   27287 kubeadm.go:157] found existing configuration files:
	
	I0816 17:04:37.901399   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 17:04:37.910156   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 17:04:37.910215   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 17:04:37.919131   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 17:04:37.927578   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 17:04:37.927651   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 17:04:37.936765   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 17:04:37.945292   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 17:04:37.945367   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 17:04:37.954326   27287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 17:04:37.962804   27287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 17:04:37.962864   27287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
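
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails, so kubeadm starts from a clean slate. A sketch of that check-and-remove loop follows, with the file list and endpoint taken from the log and the helper name invented for illustration.

// pruneStaleKubeConfigs removes any kubeconfig under /etc/kubernetes that
// does not reference the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log. Illustrative sketch only.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func pruneStaleKubeConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it ("rm -f" semantics).
			_ = os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	pruneStaleKubeConfigs("https://control-plane.minikube.internal:8443")
}
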
	I0816 17:04:37.971386   27287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 17:04:38.058317   27287 kubeadm.go:310] W0816 17:04:38.040644     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:04:38.059090   27287 kubeadm.go:310] W0816 17:04:38.041564     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:04:38.155015   27287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 17:04:49.272689   27287 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 17:04:49.272761   27287 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 17:04:49.272877   27287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 17:04:49.273019   27287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 17:04:49.273139   27287 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 17:04:49.273208   27287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 17:04:49.274913   27287 out.go:235]   - Generating certificates and keys ...
	I0816 17:04:49.275005   27287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 17:04:49.275070   27287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 17:04:49.275135   27287 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 17:04:49.275194   27287 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 17:04:49.275252   27287 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 17:04:49.275294   27287 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 17:04:49.275343   27287 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 17:04:49.275437   27287 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-764617 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0816 17:04:49.275491   27287 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 17:04:49.275598   27287 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-764617 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0816 17:04:49.275653   27287 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 17:04:49.275710   27287 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 17:04:49.275751   27287 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 17:04:49.275797   27287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 17:04:49.275843   27287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 17:04:49.275894   27287 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 17:04:49.275951   27287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 17:04:49.276020   27287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 17:04:49.276070   27287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 17:04:49.276138   27287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 17:04:49.276193   27287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 17:04:49.277491   27287 out.go:235]   - Booting up control plane ...
	I0816 17:04:49.277585   27287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 17:04:49.277682   27287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 17:04:49.277767   27287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 17:04:49.277904   27287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 17:04:49.278017   27287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 17:04:49.278072   27287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 17:04:49.278261   27287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 17:04:49.278399   27287 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 17:04:49.278456   27287 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.459421ms
	I0816 17:04:49.278542   27287 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 17:04:49.278631   27287 kubeadm.go:310] [api-check] The API server is healthy after 6.002832221s
	I0816 17:04:49.278750   27287 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 17:04:49.278889   27287 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 17:04:49.278959   27287 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 17:04:49.279105   27287 kubeadm.go:310] [mark-control-plane] Marking the node ha-764617 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 17:04:49.279155   27287 kubeadm.go:310] [bootstrap-token] Using token: okdxih.5xmh1by8w9juwakw
	I0816 17:04:49.280296   27287 out.go:235]   - Configuring RBAC rules ...
	I0816 17:04:49.280383   27287 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 17:04:49.280451   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 17:04:49.280584   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 17:04:49.280734   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 17:04:49.280846   27287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 17:04:49.280946   27287 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 17:04:49.281122   27287 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 17:04:49.281198   27287 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 17:04:49.281252   27287 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 17:04:49.281259   27287 kubeadm.go:310] 
	I0816 17:04:49.281307   27287 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 17:04:49.281312   27287 kubeadm.go:310] 
	I0816 17:04:49.281431   27287 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 17:04:49.281440   27287 kubeadm.go:310] 
	I0816 17:04:49.281475   27287 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 17:04:49.281563   27287 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 17:04:49.281646   27287 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 17:04:49.281655   27287 kubeadm.go:310] 
	I0816 17:04:49.281733   27287 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 17:04:49.281743   27287 kubeadm.go:310] 
	I0816 17:04:49.281824   27287 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 17:04:49.281833   27287 kubeadm.go:310] 
	I0816 17:04:49.281903   27287 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 17:04:49.282006   27287 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 17:04:49.282108   27287 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 17:04:49.282117   27287 kubeadm.go:310] 
	I0816 17:04:49.282261   27287 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 17:04:49.282408   27287 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 17:04:49.282422   27287 kubeadm.go:310] 
	I0816 17:04:49.282542   27287 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token okdxih.5xmh1by8w9juwakw \
	I0816 17:04:49.282693   27287 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 17:04:49.282726   27287 kubeadm.go:310] 	--control-plane 
	I0816 17:04:49.282735   27287 kubeadm.go:310] 
	I0816 17:04:49.282819   27287 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 17:04:49.282826   27287 kubeadm.go:310] 
	I0816 17:04:49.282937   27287 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token okdxih.5xmh1by8w9juwakw \
	I0816 17:04:49.283065   27287 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 17:04:49.283081   27287 cni.go:84] Creating CNI manager for ""
	I0816 17:04:49.283089   27287 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 17:04:49.284435   27287 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 17:04:49.285374   27287 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 17:04:49.290271   27287 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 17:04:49.290284   27287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 17:04:49.310395   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
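
Because only one node exists so far, minikube picks kindnet and applies the rendered CNI manifest with the pinned kubectl binary and an explicit kubeconfig. A minimal sketch of that apply step is below; the paths are copied from the log and the wrapper function is an assumption for illustration.

// applyManifest shells out to the pinned kubectl binary with an explicit
// kubeconfig, the same shape as the CNI apply in the log. Sketch only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/var/tmp/minikube/cni.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
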
	I0816 17:04:49.731837   27287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 17:04:49.731919   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:49.731967   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-764617 minikube.k8s.io/updated_at=2024_08_16T17_04_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-764617 minikube.k8s.io/primary=true
	I0816 17:04:49.759424   27287 ops.go:34] apiserver oom_adj: -16
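
The value -16 confirms the apiserver process has been given a protected OOM score. A sketch of reading that value the same way the log's "cat /proc/$(pgrep kube-apiserver)/oom_adj" command does (it assumes pgrep is available and matches a single process):

// apiserverOOMAdj reads the kernel OOM adjustment of the kube-apiserver
// process via /proc, mirroring the log's shell pipeline. Sketch only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func apiserverOOMAdj() (string, error) {
	// Assumes exactly one kube-apiserver process is running.
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", adj) // the log reports -16
}
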
	I0816 17:04:49.929739   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:50.430106   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:50.929879   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:51.430115   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:51.929910   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:52.430781   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:52.930148   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:53.430816   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:04:53.568437   27287 kubeadm.go:1113] duration metric: took 3.836568007s to wait for elevateKubeSystemPrivileges
	I0816 17:04:53.568470   27287 kubeadm.go:394] duration metric: took 15.731325614s to StartCluster
	I0816 17:04:53.568485   27287 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:53.568549   27287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:04:53.569221   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:04:53.569405   27287 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:04:53.569423   27287 start.go:241] waiting for startup goroutines ...
	I0816 17:04:53.569436   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 17:04:53.569429   27287 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 17:04:53.569482   27287 addons.go:69] Setting storage-provisioner=true in profile "ha-764617"
	I0816 17:04:53.569500   27287 addons.go:69] Setting default-storageclass=true in profile "ha-764617"
	I0816 17:04:53.569513   27287 addons.go:234] Setting addon storage-provisioner=true in "ha-764617"
	I0816 17:04:53.569533   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:04:53.569555   27287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-764617"
	I0816 17:04:53.569636   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:04:53.569915   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.569935   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.569950   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.569978   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.585332   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0816 17:04:53.585337   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44019
	I0816 17:04:53.585860   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.585959   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.586355   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.586374   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.586487   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.586508   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.586714   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.586906   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.587088   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:53.587271   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.587295   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.589361   27287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:04:53.589725   27287 kapi.go:59] client config for ha-764617: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt", KeyFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key", CAFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 17:04:53.590222   27287 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 17:04:53.590492   27287 addons.go:234] Setting addon default-storageclass=true in "ha-764617"
	I0816 17:04:53.590533   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:04:53.590896   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.590928   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.603076   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0816 17:04:53.603549   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.604052   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.604070   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.604493   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.604697   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:53.606306   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I0816 17:04:53.606807   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:53.606861   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.607374   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.607419   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.607765   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.608235   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:53.608260   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:53.608587   27287 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:04:53.610269   27287 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:04:53.610286   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 17:04:53.610301   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:53.613537   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.613969   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:53.613998   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.614096   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:53.614295   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:53.614485   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:53.614662   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
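
Commands on the node are executed over SSH using the per-machine id_rsa key and the "docker" user shown above. A sketch of building such a client with golang.org/x/crypto/ssh follows; host-key checking is disabled only because these are throwaway test VMs, and this is not minikube's sshutil code.

// newSSHClient dials the node over SSH with a private-key credential, the
// same shape as the sshutil client the log reports. Sketch only.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for disposable test VMs only
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := newSSHClient("192.168.39.18:22", "docker",
		"/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer client.Close()
	fmt.Println("ssh connection established")
}
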
	I0816 17:04:53.624062   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I0816 17:04:53.624481   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:53.625005   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:53.625031   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:53.625342   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:53.625621   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:04:53.627483   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:04:53.627770   27287 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 17:04:53.627787   27287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 17:04:53.627805   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:04:53.630290   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.630708   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:04:53.630732   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:04:53.630936   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:04:53.631090   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:04:53.631243   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:04:53.631362   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:04:53.696311   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 17:04:53.759477   27287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:04:53.789447   27287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 17:04:54.163504   27287 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
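
The long pipeline above edits the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1): a hosts block is inserted ahead of the forward plugin before the ConfigMap is replaced. A sketch of that insertion in Go (the function name and the sample Corefile are illustrative):

// injectHostRecord inserts a "hosts" stanza ahead of the "forward" plugin in
// a CoreDNS Corefile, which is what the sed pipeline in the log does before
// replacing the ConfigMap. Sketch only.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block immediately before the forward plugin line.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(stanza)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
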
	I0816 17:04:54.432951   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.432979   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.433014   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.433033   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.433264   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.433278   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.433289   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.433297   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.433340   27287 main.go:141] libmachine: (ha-764617) DBG | Closing plugin on server side
	I0816 17:04:54.433553   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.433568   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.433624   27287 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 17:04:54.433640   27287 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 17:04:54.433721   27287 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0816 17:04:54.433728   27287 round_trippers.go:469] Request Headers:
	I0816 17:04:54.433739   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:04:54.433744   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:04:54.434163   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.434183   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.434193   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.434202   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.434404   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.434417   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.446734   27287 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0816 17:04:54.447515   27287 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0816 17:04:54.447535   27287 round_trippers.go:469] Request Headers:
	I0816 17:04:54.447546   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:04:54.447555   27287 round_trippers.go:473]     Content-Type: application/json
	I0816 17:04:54.447560   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:04:54.450522   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:04:54.450680   27287 main.go:141] libmachine: Making call to close driver server
	I0816 17:04:54.450695   27287 main.go:141] libmachine: (ha-764617) Calling .Close
	I0816 17:04:54.450967   27287 main.go:141] libmachine: Successfully made call to close driver server
	I0816 17:04:54.450980   27287 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 17:04:54.452662   27287 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 17:04:54.453868   27287 addons.go:510] duration metric: took 884.435075ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0816 17:04:54.453898   27287 start.go:246] waiting for cluster config update ...
	I0816 17:04:54.453907   27287 start.go:255] writing updated cluster config ...
	I0816 17:04:54.455355   27287 out.go:201] 
	I0816 17:04:54.456729   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:04:54.456801   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:54.458188   27287 out.go:177] * Starting "ha-764617-m02" control-plane node in "ha-764617" cluster
	I0816 17:04:54.459321   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:04:54.459338   27287 cache.go:56] Caching tarball of preloaded images
	I0816 17:04:54.459424   27287 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:04:54.459438   27287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:04:54.459514   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:04:54.459659   27287 start.go:360] acquireMachinesLock for ha-764617-m02: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:04:54.459700   27287 start.go:364] duration metric: took 23.793µs to acquireMachinesLock for "ha-764617-m02"
	I0816 17:04:54.459729   27287 start.go:93] Provisioning new machine with config: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:04:54.459788   27287 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0816 17:04:54.461263   27287 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 17:04:54.461335   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:04:54.461360   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:04:54.475683   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
	I0816 17:04:54.476121   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:04:54.476668   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:04:54.476694   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:04:54.477067   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:04:54.477289   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:04:54.477529   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:04:54.477753   27287 start.go:159] libmachine.API.Create for "ha-764617" (driver="kvm2")
	I0816 17:04:54.477778   27287 client.go:168] LocalClient.Create starting
	I0816 17:04:54.477809   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 17:04:54.477844   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:54.477860   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:54.477905   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 17:04:54.477922   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:04:54.477933   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:04:54.477949   27287 main.go:141] libmachine: Running pre-create checks...
	I0816 17:04:54.477957   27287 main.go:141] libmachine: (ha-764617-m02) Calling .PreCreateCheck
	I0816 17:04:54.478121   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetConfigRaw
	I0816 17:04:54.478601   27287 main.go:141] libmachine: Creating machine...
	I0816 17:04:54.478613   27287 main.go:141] libmachine: (ha-764617-m02) Calling .Create
	I0816 17:04:54.478746   27287 main.go:141] libmachine: (ha-764617-m02) Creating KVM machine...
	I0816 17:04:54.480066   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found existing default KVM network
	I0816 17:04:54.480120   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found existing private KVM network mk-ha-764617
	I0816 17:04:54.480315   27287 main.go:141] libmachine: (ha-764617-m02) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02 ...
	I0816 17:04:54.480338   27287 main.go:141] libmachine: (ha-764617-m02) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 17:04:54.480423   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.480315   27641 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:54.480524   27287 main.go:141] libmachine: (ha-764617-m02) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 17:04:54.739664   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.739505   27641 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa...
	I0816 17:04:54.905076   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.904937   27641 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/ha-764617-m02.rawdisk...
	I0816 17:04:54.905097   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Writing magic tar header
	I0816 17:04:54.905107   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Writing SSH key tar header
	I0816 17:04:54.905115   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:54.905072   27641 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02 ...
	I0816 17:04:54.905225   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02
	I0816 17:04:54.905287   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02 (perms=drwx------)
	I0816 17:04:54.905317   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 17:04:54.905338   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 17:04:54.905351   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 17:04:54.905373   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 17:04:54.905386   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 17:04:54.905397   27287 main.go:141] libmachine: (ha-764617-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 17:04:54.905412   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:04:54.905425   27287 main.go:141] libmachine: (ha-764617-m02) Creating domain...
	I0816 17:04:54.905446   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 17:04:54.905459   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 17:04:54.905473   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home/jenkins
	I0816 17:04:54.905509   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Checking permissions on dir: /home
	I0816 17:04:54.905537   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Skipping /home - not owner
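
Before the domain below is defined, the new machine directory is populated with a freshly generated SSH key pair and a raw disk image, as logged above. A hedged sketch of the key-generation step, writing id_rsa and id_rsa.pub with Go's standard crypto packages and x/crypto/ssh (minikube's actual helper differs):

// writeSSHKeyPair generates an RSA key and writes id_rsa (PEM) and
// id_rsa.pub (authorized_keys format) into the machine directory, roughly
// what "Creating ssh key" in the log refers to. Sketch only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
)

func writeSSHKeyPair(dir string) error {
	if err := os.MkdirAll(dir, 0o700); err != nil {
		return err
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(filepath.Join(dir, "id_rsa"), privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	if err := writeSSHKeyPair("/tmp/ha-764617-m02"); err != nil {
		panic(err)
	}
}
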
	I0816 17:04:54.906422   27287 main.go:141] libmachine: (ha-764617-m02) define libvirt domain using xml: 
	I0816 17:04:54.906449   27287 main.go:141] libmachine: (ha-764617-m02) <domain type='kvm'>
	I0816 17:04:54.906460   27287 main.go:141] libmachine: (ha-764617-m02)   <name>ha-764617-m02</name>
	I0816 17:04:54.906472   27287 main.go:141] libmachine: (ha-764617-m02)   <memory unit='MiB'>2200</memory>
	I0816 17:04:54.906483   27287 main.go:141] libmachine: (ha-764617-m02)   <vcpu>2</vcpu>
	I0816 17:04:54.906491   27287 main.go:141] libmachine: (ha-764617-m02)   <features>
	I0816 17:04:54.906499   27287 main.go:141] libmachine: (ha-764617-m02)     <acpi/>
	I0816 17:04:54.906509   27287 main.go:141] libmachine: (ha-764617-m02)     <apic/>
	I0816 17:04:54.906520   27287 main.go:141] libmachine: (ha-764617-m02)     <pae/>
	I0816 17:04:54.906528   27287 main.go:141] libmachine: (ha-764617-m02)     
	I0816 17:04:54.906534   27287 main.go:141] libmachine: (ha-764617-m02)   </features>
	I0816 17:04:54.906541   27287 main.go:141] libmachine: (ha-764617-m02)   <cpu mode='host-passthrough'>
	I0816 17:04:54.906561   27287 main.go:141] libmachine: (ha-764617-m02)   
	I0816 17:04:54.906580   27287 main.go:141] libmachine: (ha-764617-m02)   </cpu>
	I0816 17:04:54.906586   27287 main.go:141] libmachine: (ha-764617-m02)   <os>
	I0816 17:04:54.906602   27287 main.go:141] libmachine: (ha-764617-m02)     <type>hvm</type>
	I0816 17:04:54.906611   27287 main.go:141] libmachine: (ha-764617-m02)     <boot dev='cdrom'/>
	I0816 17:04:54.906615   27287 main.go:141] libmachine: (ha-764617-m02)     <boot dev='hd'/>
	I0816 17:04:54.906624   27287 main.go:141] libmachine: (ha-764617-m02)     <bootmenu enable='no'/>
	I0816 17:04:54.906628   27287 main.go:141] libmachine: (ha-764617-m02)   </os>
	I0816 17:04:54.906636   27287 main.go:141] libmachine: (ha-764617-m02)   <devices>
	I0816 17:04:54.906641   27287 main.go:141] libmachine: (ha-764617-m02)     <disk type='file' device='cdrom'>
	I0816 17:04:54.906666   27287 main.go:141] libmachine: (ha-764617-m02)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/boot2docker.iso'/>
	I0816 17:04:54.906684   27287 main.go:141] libmachine: (ha-764617-m02)       <target dev='hdc' bus='scsi'/>
	I0816 17:04:54.906697   27287 main.go:141] libmachine: (ha-764617-m02)       <readonly/>
	I0816 17:04:54.906706   27287 main.go:141] libmachine: (ha-764617-m02)     </disk>
	I0816 17:04:54.906713   27287 main.go:141] libmachine: (ha-764617-m02)     <disk type='file' device='disk'>
	I0816 17:04:54.906722   27287 main.go:141] libmachine: (ha-764617-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 17:04:54.906730   27287 main.go:141] libmachine: (ha-764617-m02)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/ha-764617-m02.rawdisk'/>
	I0816 17:04:54.906739   27287 main.go:141] libmachine: (ha-764617-m02)       <target dev='hda' bus='virtio'/>
	I0816 17:04:54.906748   27287 main.go:141] libmachine: (ha-764617-m02)     </disk>
	I0816 17:04:54.906772   27287 main.go:141] libmachine: (ha-764617-m02)     <interface type='network'>
	I0816 17:04:54.906804   27287 main.go:141] libmachine: (ha-764617-m02)       <source network='mk-ha-764617'/>
	I0816 17:04:54.906844   27287 main.go:141] libmachine: (ha-764617-m02)       <model type='virtio'/>
	I0816 17:04:54.906860   27287 main.go:141] libmachine: (ha-764617-m02)     </interface>
	I0816 17:04:54.906869   27287 main.go:141] libmachine: (ha-764617-m02)     <interface type='network'>
	I0816 17:04:54.906897   27287 main.go:141] libmachine: (ha-764617-m02)       <source network='default'/>
	I0816 17:04:54.906924   27287 main.go:141] libmachine: (ha-764617-m02)       <model type='virtio'/>
	I0816 17:04:54.906937   27287 main.go:141] libmachine: (ha-764617-m02)     </interface>
	I0816 17:04:54.906948   27287 main.go:141] libmachine: (ha-764617-m02)     <serial type='pty'>
	I0816 17:04:54.906958   27287 main.go:141] libmachine: (ha-764617-m02)       <target port='0'/>
	I0816 17:04:54.906970   27287 main.go:141] libmachine: (ha-764617-m02)     </serial>
	I0816 17:04:54.906984   27287 main.go:141] libmachine: (ha-764617-m02)     <console type='pty'>
	I0816 17:04:54.906999   27287 main.go:141] libmachine: (ha-764617-m02)       <target type='serial' port='0'/>
	I0816 17:04:54.907012   27287 main.go:141] libmachine: (ha-764617-m02)     </console>
	I0816 17:04:54.907026   27287 main.go:141] libmachine: (ha-764617-m02)     <rng model='virtio'>
	I0816 17:04:54.907041   27287 main.go:141] libmachine: (ha-764617-m02)       <backend model='random'>/dev/random</backend>
	I0816 17:04:54.907054   27287 main.go:141] libmachine: (ha-764617-m02)     </rng>
	I0816 17:04:54.907079   27287 main.go:141] libmachine: (ha-764617-m02)     
	I0816 17:04:54.907099   27287 main.go:141] libmachine: (ha-764617-m02)     
	I0816 17:04:54.907127   27287 main.go:141] libmachine: (ha-764617-m02)   </devices>
	I0816 17:04:54.907138   27287 main.go:141] libmachine: (ha-764617-m02) </domain>
	I0816 17:04:54.907151   27287 main.go:141] libmachine: (ha-764617-m02) 
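The lines above show the kvm2 driver defining a libvirt domain from generated XML and then creating it. A minimal sketch of that define-then-start step, assuming the libvirt.org/go/libvirt bindings are available; the connection URI, file name, and error handling here are illustrative and not taken from the minikube driver itself.

    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Domain XML like the one dumped in the log above.
        xml, err := os.ReadFile("ha-764617-m02.xml")
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain, then start it (roughly: virsh define + virsh start).
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
    }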
	I0816 17:04:54.913591   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:07:50:41 in network default
	I0816 17:04:54.914164   27287 main.go:141] libmachine: (ha-764617-m02) Ensuring networks are active...
	I0816 17:04:54.914182   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:54.914860   27287 main.go:141] libmachine: (ha-764617-m02) Ensuring network default is active
	I0816 17:04:54.915188   27287 main.go:141] libmachine: (ha-764617-m02) Ensuring network mk-ha-764617 is active
	I0816 17:04:54.915654   27287 main.go:141] libmachine: (ha-764617-m02) Getting domain xml...
	I0816 17:04:54.916475   27287 main.go:141] libmachine: (ha-764617-m02) Creating domain...
	I0816 17:04:56.120910   27287 main.go:141] libmachine: (ha-764617-m02) Waiting to get IP...
	I0816 17:04:56.123112   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:56.123563   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:56.123587   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:56.123545   27641 retry.go:31] will retry after 262.894322ms: waiting for machine to come up
	I0816 17:04:56.388173   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:56.388595   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:56.388640   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:56.388547   27641 retry.go:31] will retry after 331.429254ms: waiting for machine to come up
	I0816 17:04:56.722096   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:56.722532   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:56.722555   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:56.722498   27641 retry.go:31] will retry after 356.120471ms: waiting for machine to come up
	I0816 17:04:57.079691   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:57.080201   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:57.080229   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:57.080136   27641 retry.go:31] will retry after 514.370488ms: waiting for machine to come up
	I0816 17:04:57.596018   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:57.596594   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:57.596636   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:57.596541   27641 retry.go:31] will retry after 552.829899ms: waiting for machine to come up
	I0816 17:04:58.150731   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:58.151261   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:58.151283   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:58.151189   27641 retry.go:31] will retry after 611.263778ms: waiting for machine to come up
	I0816 17:04:58.763791   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:58.764307   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:58.764332   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:58.764275   27641 retry.go:31] will retry after 1.056287332s: waiting for machine to come up
	I0816 17:04:59.822389   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:04:59.822774   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:04:59.822803   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:04:59.822739   27641 retry.go:31] will retry after 1.157897358s: waiting for machine to come up
	I0816 17:05:00.981939   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:00.982458   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:00.982487   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:00.982406   27641 retry.go:31] will retry after 1.380933513s: waiting for machine to come up
	I0816 17:05:02.364965   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:02.365510   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:02.365532   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:02.365457   27641 retry.go:31] will retry after 2.011545615s: waiting for machine to come up
	I0816 17:05:04.379865   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:04.380325   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:04.380351   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:04.380277   27641 retry.go:31] will retry after 2.507828277s: waiting for machine to come up
	I0816 17:05:06.891550   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:06.891913   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:06.891933   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:06.891886   27641 retry.go:31] will retry after 2.791745221s: waiting for machine to come up
	I0816 17:05:09.685124   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:09.685567   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find current IP address of domain ha-764617-m02 in network mk-ha-764617
	I0816 17:05:09.685612   27287 main.go:141] libmachine: (ha-764617-m02) DBG | I0816 17:05:09.685517   27641 retry.go:31] will retry after 4.387344822s: waiting for machine to come up
	I0816 17:05:14.077676   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.078051   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has current primary IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.078083   27287 main.go:141] libmachine: (ha-764617-m02) Found IP for machine: 192.168.39.184
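The "will retry after ..." lines above are a retry loop with a growing wait while the new VM acquires a DHCP lease. A minimal stdlib sketch of that pattern; lookupIP is a hypothetical placeholder for the lease query the driver actually performs, and the growth factor and timeout are illustrative.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying the libvirt network for the domain's lease.
    func lookupIP() (string, error) { return "", errNoLease }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
            time.Sleep(backoff)
            backoff = backoff * 3 / 2 // grow the wait, roughly like the log above
        }
        return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
        if ip, err := waitForIP(30 * time.Second); err == nil {
            fmt.Println("found IP:", ip)
        }
    }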
	I0816 17:05:14.078097   27287 main.go:141] libmachine: (ha-764617-m02) Reserving static IP address...
	I0816 17:05:14.078416   27287 main.go:141] libmachine: (ha-764617-m02) DBG | unable to find host DHCP lease matching {name: "ha-764617-m02", mac: "52:54:00:cf:3e:7f", ip: "192.168.39.184"} in network mk-ha-764617
	I0816 17:05:14.151792   27287 main.go:141] libmachine: (ha-764617-m02) Reserved static IP address: 192.168.39.184
	I0816 17:05:14.151825   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Getting to WaitForSSH function...
	I0816 17:05:14.151835   27287 main.go:141] libmachine: (ha-764617-m02) Waiting for SSH to be available...
	I0816 17:05:14.154304   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.154742   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.154765   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.154942   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Using SSH client type: external
	I0816 17:05:14.154964   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa (-rw-------)
	I0816 17:05:14.154991   27287 main.go:141] libmachine: (ha-764617-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 17:05:14.155000   27287 main.go:141] libmachine: (ha-764617-m02) DBG | About to run SSH command:
	I0816 17:05:14.155063   27287 main.go:141] libmachine: (ha-764617-m02) DBG | exit 0
	I0816 17:05:14.276467   27287 main.go:141] libmachine: (ha-764617-m02) DBG | SSH cmd err, output: <nil>: 
	I0816 17:05:14.276757   27287 main.go:141] libmachine: (ha-764617-m02) KVM machine creation complete!
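The "Using SSH client type: external" step above probes the new VM by shelling out to the system ssh binary with host-key checking disabled and running "exit 0" until it succeeds. A minimal sketch of that probe; the key path and address are copied from the log, the retry count and interval are illustrative.

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa",
            "-p", "22",
            "docker@192.168.39.184",
            "exit 0",
        }
        for attempt := 0; attempt < 30; attempt++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                log.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for SSH")
    }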
	I0816 17:05:14.277004   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetConfigRaw
	I0816 17:05:14.277523   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:14.277727   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:14.277913   27287 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 17:05:14.277925   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:05:14.279235   27287 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 17:05:14.279250   27287 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 17:05:14.279258   27287 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 17:05:14.279267   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.281382   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.281636   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.281666   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.281808   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.281955   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.282111   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.282212   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.282368   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.282621   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.282638   27287 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 17:05:14.379575   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:05:14.379602   27287 main.go:141] libmachine: Detecting the provisioner...
	I0816 17:05:14.379612   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.382527   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.382891   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.382923   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.383040   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.383210   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.383406   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.383542   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.383795   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.383969   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.383981   27287 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 17:05:14.480969   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 17:05:14.481075   27287 main.go:141] libmachine: found compatible host: buildroot
	I0816 17:05:14.481088   27287 main.go:141] libmachine: Provisioning with buildroot...
	I0816 17:05:14.481099   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:05:14.481349   27287 buildroot.go:166] provisioning hostname "ha-764617-m02"
	I0816 17:05:14.481379   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:05:14.481574   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.484312   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.484718   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.484743   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.484911   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.485089   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.485264   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.485418   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.485636   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.485815   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.485829   27287 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617-m02 && echo "ha-764617-m02" | sudo tee /etc/hostname
	I0816 17:05:14.598108   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617-m02
	
	I0816 17:05:14.598133   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.601493   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.601919   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.601951   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.602152   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:14.602347   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.602499   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:14.602619   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:14.602763   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:14.602976   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:14.603001   27287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:05:14.710138   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
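The hostname provisioning above runs over the "native" SSH client. A minimal sketch of that step, assuming the golang.org/x/crypto/ssh package; minikube's own runner adds buffering, retries, and key management, so treat this only as an illustration of the command being executed.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.39.184:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Same command as in the log: set the transient and persistent hostname.
        out, err := session.CombinedOutput(`sudo hostname ha-764617-m02 && echo "ha-764617-m02" | sudo tee /etc/hostname`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }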
	I0816 17:05:14.710174   27287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:05:14.710194   27287 buildroot.go:174] setting up certificates
	I0816 17:05:14.710205   27287 provision.go:84] configureAuth start
	I0816 17:05:14.710217   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetMachineName
	I0816 17:05:14.710524   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:14.713732   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.714158   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.714191   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.714354   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:14.716407   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.716766   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:14.716796   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:14.716917   27287 provision.go:143] copyHostCerts
	I0816 17:05:14.716949   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:05:14.716990   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:05:14.717002   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:05:14.717079   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:05:14.717184   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:05:14.717211   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:05:14.717220   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:05:14.717261   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:05:14.717340   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:05:14.717364   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:05:14.717374   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:05:14.717410   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:05:14.717489   27287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617-m02 san=[127.0.0.1 192.168.39.184 ha-764617-m02 localhost minikube]
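The line above generates a server certificate whose SANs cover the machine's IPs and names. A minimal crypto/x509 sketch of building such a certificate with the SANs from the log; it self-signs for brevity, whereas minikube signs with its CA key, and the validity period and key usages here are illustrative.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-764617-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as listed in the log line above.
            DNSNames:    []string{"ha-764617-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.184")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile("server.pem", pemBytes, 0644); err != nil {
            log.Fatal(err)
        }
    }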
	I0816 17:05:15.172415   27287 provision.go:177] copyRemoteCerts
	I0816 17:05:15.172467   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:05:15.172488   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.175218   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.175574   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.175596   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.175818   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.176028   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.176205   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.176337   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:15.255011   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:05:15.255084   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 17:05:15.280457   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:05:15.280534   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:05:15.309859   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:05:15.309917   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:05:15.334358   27287 provision.go:87] duration metric: took 624.139811ms to configureAuth
	I0816 17:05:15.334386   27287 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:05:15.334544   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:05:15.334639   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.337113   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.337604   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.337635   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.337772   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.337980   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.338161   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.338290   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.338443   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:15.338647   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:15.338663   27287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:05:15.594238   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:05:15.594260   27287 main.go:141] libmachine: Checking connection to Docker...
	I0816 17:05:15.594268   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetURL
	I0816 17:05:15.595833   27287 main.go:141] libmachine: (ha-764617-m02) DBG | Using libvirt version 6000000
	I0816 17:05:15.598037   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.598365   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.598392   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.598609   27287 main.go:141] libmachine: Docker is up and running!
	I0816 17:05:15.598626   27287 main.go:141] libmachine: Reticulating splines...
	I0816 17:05:15.598632   27287 client.go:171] duration metric: took 21.12084563s to LocalClient.Create
	I0816 17:05:15.598652   27287 start.go:167] duration metric: took 21.12090112s to libmachine.API.Create "ha-764617"
	I0816 17:05:15.598661   27287 start.go:293] postStartSetup for "ha-764617-m02" (driver="kvm2")
	I0816 17:05:15.598670   27287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:05:15.598693   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.598897   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:05:15.598919   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.601355   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.601756   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.601787   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.601977   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.602157   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.602357   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.602513   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:15.678515   27287 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:05:15.682486   27287 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:05:15.682513   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:05:15.682605   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:05:15.682708   27287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:05:15.682721   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:05:15.682837   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:05:15.691786   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:05:15.714314   27287 start.go:296] duration metric: took 115.641935ms for postStartSetup
	I0816 17:05:15.714368   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetConfigRaw
	I0816 17:05:15.714977   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:15.717734   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.718053   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.718074   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.718364   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:05:15.718577   27287 start.go:128] duration metric: took 21.258778684s to createHost
	I0816 17:05:15.718598   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.721229   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.721603   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.721633   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.721787   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.721954   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.722168   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.722332   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.722508   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:05:15.722688   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0816 17:05:15.722701   27287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:05:15.821238   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827915.794612205
	
	I0816 17:05:15.821257   27287 fix.go:216] guest clock: 1723827915.794612205
	I0816 17:05:15.821267   27287 fix.go:229] Guest: 2024-08-16 17:05:15.794612205 +0000 UTC Remote: 2024-08-16 17:05:15.718589053 +0000 UTC m=+64.576049869 (delta=76.023152ms)
	I0816 17:05:15.821285   27287 fix.go:200] guest clock delta is within tolerance: 76.023152ms
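The clock check above runs `date +%s.%N` on the guest and compares it against the host's time. A minimal stdlib sketch of parsing that output and computing the delta; the tolerance value below is illustrative, not minikube's exact threshold.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        if len(parts) != 2 {
            return time.Time{}, fmt.Errorf("unexpected clock output %q", out)
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1723827915.794612205") // value from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative tolerance
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }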
	I0816 17:05:15.821314   27287 start.go:83] releasing machines lock for "ha-764617-m02", held for 21.36157963s
	I0816 17:05:15.821341   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.821626   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:15.824154   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.824543   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.824576   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.827172   27287 out.go:177] * Found network options:
	I0816 17:05:15.828431   27287 out.go:177]   - NO_PROXY=192.168.39.18
	W0816 17:05:15.829535   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:05:15.829578   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.830199   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.830386   27287 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:05:15.830468   27287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:05:15.830514   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	W0816 17:05:15.830612   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:05:15.830691   27287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:05:15.830714   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:05:15.833788   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834027   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834178   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.834203   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834382   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:15.834413   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:15.834383   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.834584   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:05:15.834661   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.834734   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:05:15.834813   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.834915   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:05:15.834932   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:15.835044   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:05:16.068814   27287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:05:16.075538   27287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:05:16.075601   27287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:05:16.091482   27287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 17:05:16.091504   27287 start.go:495] detecting cgroup driver to use...
	I0816 17:05:16.091561   27287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:05:16.110257   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:05:16.126323   27287 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:05:16.126375   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:05:16.141399   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:05:16.154490   27287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:05:16.269157   27287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:05:16.407657   27287 docker.go:233] disabling docker service ...
	I0816 17:05:16.407721   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:05:16.421434   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:05:16.433516   27287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:05:16.567272   27287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:05:16.689387   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:05:16.703189   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:05:16.721006   27287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:05:16.721072   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.731367   27287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:05:16.731440   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.741272   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.751083   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.761289   27287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:05:16.771215   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.781163   27287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.797739   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:05:16.808133   27287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:05:16.817377   27287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 17:05:16.817434   27287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 17:05:16.829184   27287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:05:16.838635   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:05:16.952234   27287 ssh_runner.go:195] Run: sudo systemctl restart crio
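The CRI-O configuration pass above is a fixed sequence of sed edits followed by a service restart. A minimal sketch that expresses that sequence as an ordered command list; runSSH is a hypothetical stand-in for minikube's ssh_runner, while the sed expressions themselves are the ones visible in the log.

    package main

    import (
        "log"
        "os/exec"
    )

    // runSSH is a hypothetical helper: it shells out to ssh and runs one remote command.
    func runSSH(addr, cmd string) error {
        return exec.Command("ssh", addr, cmd).Run()
    }

    func main() {
        cmds := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, c := range cmds {
            if err := runSSH("docker@192.168.39.184", c); err != nil {
                log.Fatalf("%s: %v", c, err)
            }
        }
    }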
	I0816 17:05:17.091683   27287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:05:17.091750   27287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:05:17.096146   27287 start.go:563] Will wait 60s for crictl version
	I0816 17:05:17.096190   27287 ssh_runner.go:195] Run: which crictl
	I0816 17:05:17.099403   27287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:05:17.135869   27287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:05:17.135939   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:05:17.164105   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:05:17.191765   27287 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:05:17.192910   27287 out.go:177]   - env NO_PROXY=192.168.39.18
	I0816 17:05:17.193933   27287 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:05:17.197050   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:17.197441   27287 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:05:08 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:05:17.197469   27287 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:05:17.197706   27287 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:05:17.202722   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:05:17.215145   27287 mustload.go:65] Loading cluster: ha-764617
	I0816 17:05:17.215352   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:05:17.215607   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:05:17.215647   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:05:17.230152   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I0816 17:05:17.230644   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:05:17.231066   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:05:17.231083   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:05:17.231367   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:05:17.231514   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:05:17.232989   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:05:17.233254   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:05:17.233282   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:05:17.247698   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0816 17:05:17.248105   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:05:17.248561   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:05:17.248587   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:05:17.248858   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:05:17.249057   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:05:17.249234   27287 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.184
	I0816 17:05:17.249245   27287 certs.go:194] generating shared ca certs ...
	I0816 17:05:17.249257   27287 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:05:17.249376   27287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:05:17.249423   27287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:05:17.249432   27287 certs.go:256] generating profile certs ...
	I0816 17:05:17.249502   27287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:05:17.249525   27287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157
	I0816 17:05:17.249556   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.184 192.168.39.254]
	I0816 17:05:17.330711   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157 ...
	I0816 17:05:17.330737   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157: {Name:mk01e6747a8590487bd79267069b868aeffb68c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:05:17.330890   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157 ...
	I0816 17:05:17.330903   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157: {Name:mke5c2cbeaef23a1785ed59c672deb9d987932b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:05:17.330968   27287 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.f9174157 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:05:17.331088   27287 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.f9174157 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:05:17.331208   27287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:05:17.331223   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:05:17.331235   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:05:17.331248   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:05:17.331260   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:05:17.331272   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:05:17.331284   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:05:17.331302   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:05:17.331314   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:05:17.331358   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:05:17.331382   27287 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:05:17.331388   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:05:17.331412   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:05:17.331433   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:05:17.331455   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:05:17.331497   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:05:17.331521   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.331534   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.331546   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.331577   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:05:17.334829   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:17.335193   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:05:17.335215   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:17.335399   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:05:17.335609   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:05:17.335758   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:05:17.335881   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:05:17.413021   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 17:05:17.417918   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 17:05:17.428202   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 17:05:17.431990   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 17:05:17.441799   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 17:05:17.446073   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 17:05:17.457196   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 17:05:17.461127   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 17:05:17.470431   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 17:05:17.474049   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 17:05:17.484125   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 17:05:17.488044   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 17:05:17.497695   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:05:17.521837   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:05:17.545316   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:05:17.568540   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:05:17.593049   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 17:05:17.614659   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 17:05:17.637248   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:05:17.660202   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:05:17.685154   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:05:17.709667   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:05:17.734027   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:05:17.758301   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 17:05:17.774810   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 17:05:17.791619   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 17:05:17.808482   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 17:05:17.824086   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 17:05:17.839536   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 17:05:17.856346   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 17:05:17.872671   27287 ssh_runner.go:195] Run: openssl version
	I0816 17:05:17.878303   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:05:17.888775   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.892983   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.893042   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:05:17.898619   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:05:17.909254   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:05:17.920282   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.924763   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.924828   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:05:17.930414   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 17:05:17.941647   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:05:17.952642   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.957203   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.957264   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:05:17.963130   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:05:17.973871   27287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:05:17.977962   27287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:05:17.978020   27287 kubeadm.go:934] updating node {m02 192.168.39.184 8443 v1.31.0 crio true true} ...
	I0816 17:05:17.978119   27287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:05:17.978153   27287 kube-vip.go:115] generating kube-vip config ...
	I0816 17:05:17.978198   27287 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:05:17.993064   27287 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:05:17.993141   27287 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 17:05:17.993203   27287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:05:18.002372   27287 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 17:05:18.002434   27287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 17:05:18.012001   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 17:05:18.012025   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:05:18.012100   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:05:18.012101   27287 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0816 17:05:18.012133   27287 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0816 17:05:18.015936   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 17:05:18.015959   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 17:05:18.945188   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:05:18.959054   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:05:18.959177   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:05:18.963226   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 17:05:18.963265   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 17:05:19.011628   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:05:19.011722   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:05:19.039543   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 17:05:19.039587   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 17:05:19.448167   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 17:05:19.456878   27287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 17:05:19.472028   27287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:05:19.487352   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 17:05:19.503229   27287 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:05:19.506741   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:05:19.518344   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:05:19.633708   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:05:19.649364   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:05:19.649821   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:05:19.649869   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:05:19.665777   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0816 17:05:19.666191   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:05:19.666695   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:05:19.666719   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:05:19.667010   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:05:19.667214   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:05:19.667340   27287 start.go:317] joinCluster: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:05:19.667431   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 17:05:19.667450   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:05:19.670648   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:19.671071   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:05:19.671101   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:05:19.671264   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:05:19.671411   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:05:19.671568   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:05:19.671714   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:05:19.819563   27287 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:05:19.819617   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5eeya2.4dclp2q50i3hu1c0 --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443"
	I0816 17:05:47.562480   27287 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5eeya2.4dclp2q50i3hu1c0 --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m02 --control-plane --apiserver-advertise-address=192.168.39.184 --apiserver-bind-port=8443": (27.742838871s)
	I0816 17:05:47.562514   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 17:05:48.085888   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-764617-m02 minikube.k8s.io/updated_at=2024_08_16T17_05_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-764617 minikube.k8s.io/primary=false
	I0816 17:05:48.193572   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-764617-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 17:05:48.317060   27287 start.go:319] duration metric: took 28.649716421s to joinCluster
	I0816 17:05:48.317133   27287 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:05:48.317412   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:05:48.318876   27287 out.go:177] * Verifying Kubernetes components...
	I0816 17:05:48.320297   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:05:48.576351   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:05:48.624479   27287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:05:48.624847   27287 kapi.go:59] client config for ha-764617: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt", KeyFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key", CAFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 17:05:48.624934   27287 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.18:8443
	I0816 17:05:48.625243   27287 node_ready.go:35] waiting up to 6m0s for node "ha-764617-m02" to be "Ready" ...
	I0816 17:05:48.625361   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:48.625373   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:48.625384   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:48.625395   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:48.635759   27287 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0816 17:05:49.125849   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:49.125873   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:49.125882   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:49.125891   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:49.129669   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:49.625957   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:49.625985   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:49.625996   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:49.626002   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:49.631635   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:05:50.126287   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:50.126314   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:50.126325   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:50.126333   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:50.129386   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:50.626242   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:50.626267   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:50.626277   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:50.626282   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:50.629996   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:50.630684   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:51.125895   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:51.125920   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:51.125932   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:51.125940   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:51.129303   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:51.625899   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:51.625918   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:51.625926   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:51.625929   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:51.629226   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:52.125938   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:52.125959   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:52.125971   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:52.125977   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:52.129480   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:52.625960   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:52.625988   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:52.626001   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:52.626010   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:52.629829   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:53.125688   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:53.125715   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:53.125728   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:53.125737   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:53.129371   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:53.129815   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:53.625804   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:53.625824   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:53.625832   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:53.625837   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:53.629053   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:54.125891   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:54.125911   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:54.125918   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:54.125922   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:54.129059   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:54.626345   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:54.626368   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:54.626378   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:54.626383   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:54.629930   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:55.126447   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:55.126466   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:55.126475   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:55.126479   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:55.130021   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:55.130693   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:55.626194   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:55.626219   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:55.626231   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:55.626238   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:55.629738   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:56.125942   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:56.125966   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:56.125976   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:56.125980   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:56.129149   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:56.625647   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:56.625670   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:56.625685   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:56.625690   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:56.629084   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:57.126223   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:57.126248   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:57.126256   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:57.126260   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:57.129302   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:57.625891   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:57.625914   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:57.625922   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:57.625926   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:57.629144   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:57.629735   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:05:58.125853   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:58.125871   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:58.125879   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:58.125882   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:58.129672   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:58.625546   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:58.625570   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:58.625579   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:58.625584   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:58.639274   27287 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0816 17:05:59.125870   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:59.125892   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:59.125900   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:59.125904   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:59.129682   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:59.626254   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:05:59.626306   27287 round_trippers.go:469] Request Headers:
	I0816 17:05:59.626317   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:05:59.626325   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:05:59.630113   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:05:59.630646   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:06:00.125423   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:00.125445   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:00.125456   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:00.125461   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:00.128282   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:00.625883   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:00.625908   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:00.625916   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:00.625920   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:00.629761   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:01.125630   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:01.125653   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:01.125662   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:01.125669   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:01.128763   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:01.625534   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:01.625559   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:01.625579   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:01.625585   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:01.628446   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:02.126446   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:02.126466   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:02.126474   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:02.126479   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:02.130362   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:02.131056   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:06:02.626468   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:02.626493   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:02.626502   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:02.626506   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:02.629586   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:03.125618   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:03.125642   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:03.125650   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:03.125654   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:03.128720   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:03.625485   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:03.625510   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:03.625516   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:03.625520   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:03.628843   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:04.125808   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:04.125831   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:04.125838   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:04.125842   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:04.129078   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:04.626408   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:04.626430   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:04.626438   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:04.626442   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:04.629697   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:04.630263   27287 node_ready.go:53] node "ha-764617-m02" has status "Ready":"False"
	I0816 17:06:05.126111   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:05.126133   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:05.126141   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:05.126147   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:05.129626   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:05.625626   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:05.625649   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:05.625657   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:05.625660   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:05.629302   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:06.125886   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:06.125908   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:06.125915   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:06.125919   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:06.129037   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:06.625492   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:06.625514   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:06.625523   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:06.625527   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:06.629059   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.126088   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.126120   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.126129   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.126133   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.130295   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:07.130922   27287 node_ready.go:49] node "ha-764617-m02" has status "Ready":"True"
	I0816 17:06:07.130940   27287 node_ready.go:38] duration metric: took 18.50566774s for node "ha-764617-m02" to be "Ready" ...
	I0816 17:06:07.130947   27287 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:06:07.131007   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:07.131017   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.131024   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.131027   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.136228   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:06:07.141833   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.141903   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d6c7g
	I0816 17:06:07.141909   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.141922   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.141929   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.145327   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.146083   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.146096   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.146103   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.146106   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.149233   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.149830   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.149846   27287 pod_ready.go:82] duration metric: took 7.989214ms for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.149857   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.149910   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rhb6h
	I0816 17:06:07.149920   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.149929   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.149936   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.154058   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:07.154960   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.154974   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.154983   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.154987   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.157780   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.158442   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.158456   27287 pod_ready.go:82] duration metric: took 8.592818ms for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.158465   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.158511   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617
	I0816 17:06:07.158518   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.158525   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.158529   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.161185   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.161743   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.161756   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.161764   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.161769   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.164153   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.164684   27287 pod_ready.go:93] pod "etcd-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.164700   27287 pod_ready.go:82] duration metric: took 6.229555ms for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.164708   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.164749   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m02
	I0816 17:06:07.164756   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.164763   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.164767   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.167071   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.167532   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.167545   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.167554   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.167559   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.170156   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:07.170924   27287 pod_ready.go:93] pod "etcd-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.170938   27287 pod_ready.go:82] duration metric: took 6.224878ms for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.170950   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.326885   27287 request.go:632] Waited for 155.886265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:06:07.326971   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:06:07.326983   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.326995   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.327007   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.331545   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:07.526808   27287 request.go:632] Waited for 194.414508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.526869   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:07.526880   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.526888   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.526895   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.529997   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.530407   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.530424   27287 pod_ready.go:82] duration metric: took 359.467581ms for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.530433   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.726626   27287 request.go:632] Waited for 196.114068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:06:07.726680   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:06:07.726685   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.726695   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.726700   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.729960   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.927084   27287 request.go:632] Waited for 196.35442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.927140   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:07.927146   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:07.927153   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:07.927157   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:07.930674   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:07.931268   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:07.931286   27287 pod_ready.go:82] duration metric: took 400.847633ms for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:07.931295   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.126382   27287 request.go:632] Waited for 195.016683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:06:08.126448   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:06:08.126456   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.126493   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.126505   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.130005   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.326983   27287 request.go:632] Waited for 196.407146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:08.327035   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:08.327040   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.327050   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.327055   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.330358   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.331167   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:08.331187   27287 pod_ready.go:82] duration metric: took 399.883787ms for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.331197   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.526198   27287 request.go:632] Waited for 194.936804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:06:08.526271   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:06:08.526282   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.526290   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.526296   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.529885   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.726890   27287 request.go:632] Waited for 196.397476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:08.726937   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:08.726942   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.726950   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.726956   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.730426   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:08.730891   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:08.730908   27287 pod_ready.go:82] duration metric: took 399.705397ms for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.730920   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:08.926092   27287 request.go:632] Waited for 195.101826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:06:08.926174   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:06:08.926185   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:08.926196   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:08.926205   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:08.929724   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.126742   27287 request.go:632] Waited for 196.364545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:09.126820   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:09.126828   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.126839   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.126846   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.130173   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.130824   27287 pod_ready.go:93] pod "kube-proxy-g5szr" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:09.130842   27287 pod_ready.go:82] duration metric: took 399.914041ms for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.130853   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.326977   27287 request.go:632] Waited for 196.050409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:06:09.327040   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:06:09.327049   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.327057   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.327067   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.330384   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.526672   27287 request.go:632] Waited for 195.249789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.526748   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.526759   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.526771   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.526780   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.530244   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.530754   27287 pod_ready.go:93] pod "kube-proxy-j75vc" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:09.530772   27287 pod_ready.go:82] duration metric: took 399.912331ms for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.530780   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.726271   27287 request.go:632] Waited for 195.417063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:06:09.726348   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:06:09.726354   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.726362   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.726367   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.729273   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:06:09.926217   27287 request.go:632] Waited for 196.280639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.926274   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:06:09.926279   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:09.926286   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:09.926290   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:09.929573   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:09.930500   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:09.930521   27287 pod_ready.go:82] duration metric: took 399.733691ms for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:09.930532   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:10.126599   27287 request.go:632] Waited for 195.994963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:06:10.126685   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:06:10.126692   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.126709   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.126715   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.130573   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:10.326558   27287 request.go:632] Waited for 195.354006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:10.326618   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:06:10.326624   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.326634   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.326638   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.330414   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:10.331169   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:06:10.331188   27287 pod_ready.go:82] duration metric: took 400.644815ms for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:06:10.331201   27287 pod_ready.go:39] duration metric: took 3.200242246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
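The pod_ready.go lines above poll each system-critical pod (and its node) until the PodReady condition reports True. Below is a minimal client-go sketch of that idea, not minikube's actual helper; the kubeconfig path and the pod name are assumptions used only for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; adjust to the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-764617", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}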
	I0816 17:06:10.331218   27287 api_server.go:52] waiting for apiserver process to appear ...
	I0816 17:06:10.331273   27287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:06:10.346906   27287 api_server.go:72] duration metric: took 22.029737745s to wait for apiserver process to appear ...
	I0816 17:06:10.346937   27287 api_server.go:88] waiting for apiserver healthz status ...
	I0816 17:06:10.346960   27287 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0816 17:06:10.353559   27287 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0816 17:06:10.353633   27287 round_trippers.go:463] GET https://192.168.39.18:8443/version
	I0816 17:06:10.353643   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.353650   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.353656   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.354592   27287 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 17:06:10.354683   27287 api_server.go:141] control plane version: v1.31.0
	I0816 17:06:10.354697   27287 api_server.go:131] duration metric: took 7.75392ms to wait for apiserver health ...
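The healthz/version probe above is a plain HTTPS GET against the apiserver endpoint shown in the log. A minimal sketch follows, assuming anonymous access to /healthz is allowed on this test cluster (minikube itself authenticates with its client certificates) and skipping TLS verification purely for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Verification skipped only because this is a throwaway sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.18:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers "200 ok", matching the log output above.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}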
	I0816 17:06:10.354704   27287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 17:06:10.526994   27287 request.go:632] Waited for 172.221674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.527062   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.527067   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.527075   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.527081   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.532825   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:06:10.537771   27287 system_pods.go:59] 17 kube-system pods found
	I0816 17:06:10.537798   27287 system_pods.go:61] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:06:10.537804   27287 system_pods.go:61] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:06:10.537808   27287 system_pods.go:61] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:06:10.537812   27287 system_pods.go:61] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:06:10.537816   27287 system_pods.go:61] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:06:10.537820   27287 system_pods.go:61] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:06:10.537823   27287 system_pods.go:61] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:06:10.537826   27287 system_pods.go:61] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:06:10.537829   27287 system_pods.go:61] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:06:10.537832   27287 system_pods.go:61] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:06:10.537835   27287 system_pods.go:61] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:06:10.537838   27287 system_pods.go:61] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:06:10.537842   27287 system_pods.go:61] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:06:10.537845   27287 system_pods.go:61] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:06:10.537848   27287 system_pods.go:61] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:06:10.537851   27287 system_pods.go:61] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:06:10.537854   27287 system_pods.go:61] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:06:10.537860   27287 system_pods.go:74] duration metric: took 183.150927ms to wait for pod list to return data ...
	I0816 17:06:10.537869   27287 default_sa.go:34] waiting for default service account to be created ...
	I0816 17:06:10.726208   27287 request.go:632] Waited for 188.25ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:06:10.726268   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:06:10.726273   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.726280   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.726285   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.730022   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:10.730207   27287 default_sa.go:45] found service account: "default"
	I0816 17:06:10.730221   27287 default_sa.go:55] duration metric: took 192.346564ms for default service account to be created ...
	I0816 17:06:10.730228   27287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 17:06:10.926666   27287 request.go:632] Waited for 196.354803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.926718   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:06:10.926723   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:10.926730   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:10.926734   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:10.931197   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:06:10.935789   27287 system_pods.go:86] 17 kube-system pods found
	I0816 17:06:10.935816   27287 system_pods.go:89] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:06:10.935821   27287 system_pods.go:89] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:06:10.935825   27287 system_pods.go:89] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:06:10.935829   27287 system_pods.go:89] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:06:10.935833   27287 system_pods.go:89] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:06:10.935836   27287 system_pods.go:89] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:06:10.935839   27287 system_pods.go:89] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:06:10.935842   27287 system_pods.go:89] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:06:10.935846   27287 system_pods.go:89] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:06:10.935848   27287 system_pods.go:89] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:06:10.935851   27287 system_pods.go:89] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:06:10.935854   27287 system_pods.go:89] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:06:10.935857   27287 system_pods.go:89] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:06:10.935860   27287 system_pods.go:89] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:06:10.935862   27287 system_pods.go:89] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:06:10.935865   27287 system_pods.go:89] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:06:10.935868   27287 system_pods.go:89] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:06:10.935874   27287 system_pods.go:126] duration metric: took 205.640857ms to wait for k8s-apps to be running ...
	I0816 17:06:10.935880   27287 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 17:06:10.935936   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:06:10.950115   27287 system_svc.go:56] duration metric: took 14.228019ms WaitForService to wait for kubelet
	I0816 17:06:10.950139   27287 kubeadm.go:582] duration metric: took 22.632976027s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:06:10.950155   27287 node_conditions.go:102] verifying NodePressure condition ...
	I0816 17:06:11.126606   27287 request.go:632] Waited for 176.366577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0816 17:06:11.126656   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0816 17:06:11.126661   27287 round_trippers.go:469] Request Headers:
	I0816 17:06:11.126672   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:06:11.126675   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:06:11.130382   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:06:11.131338   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:06:11.131363   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:06:11.131375   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:06:11.131379   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:06:11.131385   27287 node_conditions.go:105] duration metric: took 181.224588ms to run NodePressure ...
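The node_conditions.go lines read each node's capacity (ephemeral storage, CPU) from the Nodes API. A rough client-go sketch of the same lookup, again with an assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// Prints values like "cpu=2 ephemeral-storage=17734596Ki", as seen above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}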
	I0816 17:06:11.131398   27287 start.go:241] waiting for startup goroutines ...
	I0816 17:06:11.131428   27287 start.go:255] writing updated cluster config ...
	I0816 17:06:11.133606   27287 out.go:201] 
	I0816 17:06:11.135100   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:06:11.135243   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:06:11.136943   27287 out.go:177] * Starting "ha-764617-m03" control-plane node in "ha-764617" cluster
	I0816 17:06:11.138071   27287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:06:11.138100   27287 cache.go:56] Caching tarball of preloaded images
	I0816 17:06:11.138215   27287 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:06:11.138234   27287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:06:11.138351   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:06:11.138617   27287 start.go:360] acquireMachinesLock for ha-764617-m03: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:06:11.138678   27287 start.go:364] duration metric: took 35.792µs to acquireMachinesLock for "ha-764617-m03"
	I0816 17:06:11.138700   27287 start.go:93] Provisioning new machine with config: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:06:11.138787   27287 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0816 17:06:11.140278   27287 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 17:06:11.140389   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:11.140435   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:11.156921   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0816 17:06:11.157298   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:11.157696   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:11.157714   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:11.157989   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:11.158175   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:11.158308   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:11.158480   27287 start.go:159] libmachine.API.Create for "ha-764617" (driver="kvm2")
	I0816 17:06:11.158505   27287 client.go:168] LocalClient.Create starting
	I0816 17:06:11.158532   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 17:06:11.158564   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:06:11.158579   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:06:11.158623   27287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 17:06:11.158649   27287 main.go:141] libmachine: Decoding PEM data...
	I0816 17:06:11.158662   27287 main.go:141] libmachine: Parsing certificate...
	I0816 17:06:11.158678   27287 main.go:141] libmachine: Running pre-create checks...
	I0816 17:06:11.158686   27287 main.go:141] libmachine: (ha-764617-m03) Calling .PreCreateCheck
	I0816 17:06:11.158867   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetConfigRaw
	I0816 17:06:11.159187   27287 main.go:141] libmachine: Creating machine...
	I0816 17:06:11.159198   27287 main.go:141] libmachine: (ha-764617-m03) Calling .Create
	I0816 17:06:11.159342   27287 main.go:141] libmachine: (ha-764617-m03) Creating KVM machine...
	I0816 17:06:11.160569   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found existing default KVM network
	I0816 17:06:11.160698   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found existing private KVM network mk-ha-764617
	I0816 17:06:11.160819   27287 main.go:141] libmachine: (ha-764617-m03) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03 ...
	I0816 17:06:11.160844   27287 main.go:141] libmachine: (ha-764617-m03) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 17:06:11.160882   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.160812   28044 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:06:11.160983   27287 main.go:141] libmachine: (ha-764617-m03) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 17:06:11.412790   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.412650   28044 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa...
	I0816 17:06:11.668182   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.668074   28044 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/ha-764617-m03.rawdisk...
	I0816 17:06:11.668206   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Writing magic tar header
	I0816 17:06:11.668216   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Writing SSH key tar header
	I0816 17:06:11.668225   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:11.668183   28044 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03 ...
	I0816 17:06:11.668301   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03
	I0816 17:06:11.668320   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 17:06:11.668329   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03 (perms=drwx------)
	I0816 17:06:11.668339   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:06:11.668350   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 17:06:11.668359   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 17:06:11.668368   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 17:06:11.668378   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home/jenkins
	I0816 17:06:11.668388   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 17:06:11.668399   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 17:06:11.668408   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 17:06:11.668414   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Checking permissions on dir: /home
	I0816 17:06:11.668424   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Skipping /home - not owner
	I0816 17:06:11.668433   27287 main.go:141] libmachine: (ha-764617-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 17:06:11.668440   27287 main.go:141] libmachine: (ha-764617-m03) Creating domain...
	I0816 17:06:11.669535   27287 main.go:141] libmachine: (ha-764617-m03) define libvirt domain using xml: 
	I0816 17:06:11.669561   27287 main.go:141] libmachine: (ha-764617-m03) <domain type='kvm'>
	I0816 17:06:11.669585   27287 main.go:141] libmachine: (ha-764617-m03)   <name>ha-764617-m03</name>
	I0816 17:06:11.669602   27287 main.go:141] libmachine: (ha-764617-m03)   <memory unit='MiB'>2200</memory>
	I0816 17:06:11.669611   27287 main.go:141] libmachine: (ha-764617-m03)   <vcpu>2</vcpu>
	I0816 17:06:11.669616   27287 main.go:141] libmachine: (ha-764617-m03)   <features>
	I0816 17:06:11.669630   27287 main.go:141] libmachine: (ha-764617-m03)     <acpi/>
	I0816 17:06:11.669637   27287 main.go:141] libmachine: (ha-764617-m03)     <apic/>
	I0816 17:06:11.669642   27287 main.go:141] libmachine: (ha-764617-m03)     <pae/>
	I0816 17:06:11.669647   27287 main.go:141] libmachine: (ha-764617-m03)     
	I0816 17:06:11.669652   27287 main.go:141] libmachine: (ha-764617-m03)   </features>
	I0816 17:06:11.669659   27287 main.go:141] libmachine: (ha-764617-m03)   <cpu mode='host-passthrough'>
	I0816 17:06:11.669664   27287 main.go:141] libmachine: (ha-764617-m03)   
	I0816 17:06:11.669671   27287 main.go:141] libmachine: (ha-764617-m03)   </cpu>
	I0816 17:06:11.669676   27287 main.go:141] libmachine: (ha-764617-m03)   <os>
	I0816 17:06:11.669683   27287 main.go:141] libmachine: (ha-764617-m03)     <type>hvm</type>
	I0816 17:06:11.669689   27287 main.go:141] libmachine: (ha-764617-m03)     <boot dev='cdrom'/>
	I0816 17:06:11.669694   27287 main.go:141] libmachine: (ha-764617-m03)     <boot dev='hd'/>
	I0816 17:06:11.669725   27287 main.go:141] libmachine: (ha-764617-m03)     <bootmenu enable='no'/>
	I0816 17:06:11.669743   27287 main.go:141] libmachine: (ha-764617-m03)   </os>
	I0816 17:06:11.669756   27287 main.go:141] libmachine: (ha-764617-m03)   <devices>
	I0816 17:06:11.669770   27287 main.go:141] libmachine: (ha-764617-m03)     <disk type='file' device='cdrom'>
	I0816 17:06:11.669789   27287 main.go:141] libmachine: (ha-764617-m03)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/boot2docker.iso'/>
	I0816 17:06:11.669801   27287 main.go:141] libmachine: (ha-764617-m03)       <target dev='hdc' bus='scsi'/>
	I0816 17:06:11.669813   27287 main.go:141] libmachine: (ha-764617-m03)       <readonly/>
	I0816 17:06:11.669824   27287 main.go:141] libmachine: (ha-764617-m03)     </disk>
	I0816 17:06:11.669838   27287 main.go:141] libmachine: (ha-764617-m03)     <disk type='file' device='disk'>
	I0816 17:06:11.669855   27287 main.go:141] libmachine: (ha-764617-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 17:06:11.669872   27287 main.go:141] libmachine: (ha-764617-m03)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/ha-764617-m03.rawdisk'/>
	I0816 17:06:11.669881   27287 main.go:141] libmachine: (ha-764617-m03)       <target dev='hda' bus='virtio'/>
	I0816 17:06:11.669894   27287 main.go:141] libmachine: (ha-764617-m03)     </disk>
	I0816 17:06:11.669906   27287 main.go:141] libmachine: (ha-764617-m03)     <interface type='network'>
	I0816 17:06:11.669919   27287 main.go:141] libmachine: (ha-764617-m03)       <source network='mk-ha-764617'/>
	I0816 17:06:11.669934   27287 main.go:141] libmachine: (ha-764617-m03)       <model type='virtio'/>
	I0816 17:06:11.669947   27287 main.go:141] libmachine: (ha-764617-m03)     </interface>
	I0816 17:06:11.669958   27287 main.go:141] libmachine: (ha-764617-m03)     <interface type='network'>
	I0816 17:06:11.669971   27287 main.go:141] libmachine: (ha-764617-m03)       <source network='default'/>
	I0816 17:06:11.669982   27287 main.go:141] libmachine: (ha-764617-m03)       <model type='virtio'/>
	I0816 17:06:11.669992   27287 main.go:141] libmachine: (ha-764617-m03)     </interface>
	I0816 17:06:11.670008   27287 main.go:141] libmachine: (ha-764617-m03)     <serial type='pty'>
	I0816 17:06:11.670020   27287 main.go:141] libmachine: (ha-764617-m03)       <target port='0'/>
	I0816 17:06:11.670031   27287 main.go:141] libmachine: (ha-764617-m03)     </serial>
	I0816 17:06:11.670044   27287 main.go:141] libmachine: (ha-764617-m03)     <console type='pty'>
	I0816 17:06:11.670055   27287 main.go:141] libmachine: (ha-764617-m03)       <target type='serial' port='0'/>
	I0816 17:06:11.670067   27287 main.go:141] libmachine: (ha-764617-m03)     </console>
	I0816 17:06:11.670082   27287 main.go:141] libmachine: (ha-764617-m03)     <rng model='virtio'>
	I0816 17:06:11.670096   27287 main.go:141] libmachine: (ha-764617-m03)       <backend model='random'>/dev/random</backend>
	I0816 17:06:11.670107   27287 main.go:141] libmachine: (ha-764617-m03)     </rng>
	I0816 17:06:11.670118   27287 main.go:141] libmachine: (ha-764617-m03)     
	I0816 17:06:11.670128   27287 main.go:141] libmachine: (ha-764617-m03)     
	I0816 17:06:11.670139   27287 main.go:141] libmachine: (ha-764617-m03)   </devices>
	I0816 17:06:11.670149   27287 main.go:141] libmachine: (ha-764617-m03) </domain>
	I0816 17:06:11.670164   27287 main.go:141] libmachine: (ha-764617-m03) 
	I0816 17:06:11.676575   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:9e:e2:cb in network default
	I0816 17:06:11.677145   27287 main.go:141] libmachine: (ha-764617-m03) Ensuring networks are active...
	I0816 17:06:11.677169   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:11.677817   27287 main.go:141] libmachine: (ha-764617-m03) Ensuring network default is active
	I0816 17:06:11.678098   27287 main.go:141] libmachine: (ha-764617-m03) Ensuring network mk-ha-764617 is active
	I0816 17:06:11.678600   27287 main.go:141] libmachine: (ha-764617-m03) Getting domain xml...
	I0816 17:06:11.679382   27287 main.go:141] libmachine: (ha-764617-m03) Creating domain...
	I0816 17:06:12.915512   27287 main.go:141] libmachine: (ha-764617-m03) Waiting to get IP...
	I0816 17:06:12.916236   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:12.916750   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:12.916774   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:12.916711   28044 retry.go:31] will retry after 273.815084ms: waiting for machine to come up
	I0816 17:06:13.192119   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:13.192776   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:13.192807   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:13.192721   28044 retry.go:31] will retry after 272.739513ms: waiting for machine to come up
	I0816 17:06:13.467229   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:13.467817   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:13.467854   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:13.467745   28044 retry.go:31] will retry after 450.727942ms: waiting for machine to come up
	I0816 17:06:13.920234   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:13.920782   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:13.920818   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:13.920701   28044 retry.go:31] will retry after 544.193183ms: waiting for machine to come up
	I0816 17:06:14.466229   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:14.466662   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:14.466688   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:14.466620   28044 retry.go:31] will retry after 511.913006ms: waiting for machine to come up
	I0816 17:06:14.979976   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:14.980459   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:14.980480   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:14.980401   28044 retry.go:31] will retry after 937.618553ms: waiting for machine to come up
	I0816 17:06:15.919639   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:15.920082   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:15.920117   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:15.920026   28044 retry.go:31] will retry after 880.489014ms: waiting for machine to come up
	I0816 17:06:16.802468   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:16.802933   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:16.802957   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:16.802877   28044 retry.go:31] will retry after 1.36764588s: waiting for machine to come up
	I0816 17:06:18.172580   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:18.173085   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:18.173111   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:18.173046   28044 retry.go:31] will retry after 1.838306763s: waiting for machine to come up
	I0816 17:06:20.013961   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:20.014417   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:20.014444   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:20.014371   28044 retry.go:31] will retry after 1.673586915s: waiting for machine to come up
	I0816 17:06:21.689665   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:21.690180   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:21.690212   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:21.690116   28044 retry.go:31] will retry after 2.511086993s: waiting for machine to come up
	I0816 17:06:24.204711   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:24.205193   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:24.205214   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:24.205157   28044 retry.go:31] will retry after 2.19927087s: waiting for machine to come up
	I0816 17:06:26.405994   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:26.406431   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:26.406451   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:26.406391   28044 retry.go:31] will retry after 3.745095666s: waiting for machine to come up
	I0816 17:06:30.153573   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:30.154034   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find current IP address of domain ha-764617-m03 in network mk-ha-764617
	I0816 17:06:30.154058   27287 main.go:141] libmachine: (ha-764617-m03) DBG | I0816 17:06:30.153986   28044 retry.go:31] will retry after 4.789795394s: waiting for machine to come up
	I0816 17:06:34.948182   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:34.948661   27287 main.go:141] libmachine: (ha-764617-m03) Found IP for machine: 192.168.39.253
	I0816 17:06:34.948679   27287 main.go:141] libmachine: (ha-764617-m03) Reserving static IP address...
	I0816 17:06:34.948693   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has current primary IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:34.949121   27287 main.go:141] libmachine: (ha-764617-m03) DBG | unable to find host DHCP lease matching {name: "ha-764617-m03", mac: "52:54:00:b2:4e:81", ip: "192.168.39.253"} in network mk-ha-764617
	I0816 17:06:35.024241   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Getting to WaitForSSH function...
	I0816 17:06:35.024268   27287 main.go:141] libmachine: (ha-764617-m03) Reserved static IP address: 192.168.39.253
	I0816 17:06:35.024281   27287 main.go:141] libmachine: (ha-764617-m03) Waiting for SSH to be available...
	I0816 17:06:35.026795   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.027288   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.027315   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.027506   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Using SSH client type: external
	I0816 17:06:35.027538   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa (-rw-------)
	I0816 17:06:35.027570   27287 main.go:141] libmachine: (ha-764617-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 17:06:35.027583   27287 main.go:141] libmachine: (ha-764617-m03) DBG | About to run SSH command:
	I0816 17:06:35.027617   27287 main.go:141] libmachine: (ha-764617-m03) DBG | exit 0
	I0816 17:06:35.148697   27287 main.go:141] libmachine: (ha-764617-m03) DBG | SSH cmd err, output: <nil>: 
	I0816 17:06:35.148982   27287 main.go:141] libmachine: (ha-764617-m03) KVM machine creation complete!
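The machine-creation phase above polls for the VM's DHCP lease with growing delays (the repeated retry.go "will retry after ..." lines). Below is a generic sketch of that retry-with-increasing-backoff pattern, not minikube's retry package; the probe function is a stand-in for "look up the domain's IP".

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// probe stands in for the IP lookup; this stub succeeds after a few attempts,
// purely for illustration.
func probe(attempt int) error {
	if attempt < 5 {
		return errors.New("no IP yet")
	}
	return nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := probe(attempt); err == nil {
			fmt.Printf("succeeded on attempt %d\n", attempt)
			return
		}
		// Add jitter and grow the base delay, roughly like the intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}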
	I0816 17:06:35.149291   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetConfigRaw
	I0816 17:06:35.149823   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:35.149991   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:35.150229   27287 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 17:06:35.150243   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:06:35.151514   27287 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 17:06:35.151531   27287 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 17:06:35.151550   27287 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 17:06:35.151559   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.154047   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.154433   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.154454   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.154636   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.154844   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.154998   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.155145   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.155314   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.155504   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.155515   27287 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 17:06:35.251611   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
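The WaitForSSH step simply runs `exit 0` over SSH until it succeeds. A minimal golang.org/x/crypto/ssh sketch of that check follows; the user, key path and address are taken from the log and are illustrative assumptions, not minikube's implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed private-key path for the machine under test.
	key, err := os.ReadFile("/path/to/machines/ha-764617-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
	}
	client, err := ssh.Dial("tcp", "192.168.39.253:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// `exit 0` succeeds only when a working shell is reachable.
	if err := session.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}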
	I0816 17:06:35.251636   27287 main.go:141] libmachine: Detecting the provisioner...
	I0816 17:06:35.251645   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.254692   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.255051   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.255069   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.255294   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.255515   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.255694   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.255897   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.256100   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.256260   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.256271   27287 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 17:06:35.352847   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 17:06:35.352931   27287 main.go:141] libmachine: found compatible host: buildroot
	I0816 17:06:35.352946   27287 main.go:141] libmachine: Provisioning with buildroot...
	I0816 17:06:35.352960   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:35.353217   27287 buildroot.go:166] provisioning hostname "ha-764617-m03"
	I0816 17:06:35.353249   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:35.353470   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.356181   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.356707   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.356742   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.357178   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.357423   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.357596   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.357756   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.357919   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.358118   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.358133   27287 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617-m03 && echo "ha-764617-m03" | sudo tee /etc/hostname
	I0816 17:06:35.471766   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617-m03
	
	I0816 17:06:35.471796   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.474625   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.475017   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.475047   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.475203   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.475401   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.475591   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.475735   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.475883   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.476080   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.476103   27287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:06:35.585652   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:06:35.585680   27287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:06:35.585693   27287 buildroot.go:174] setting up certificates
	I0816 17:06:35.585700   27287 provision.go:84] configureAuth start
	I0816 17:06:35.585708   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetMachineName
	I0816 17:06:35.585971   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:35.588524   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.588946   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.588979   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.589077   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.591437   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.591747   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.591768   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.591924   27287 provision.go:143] copyHostCerts
	I0816 17:06:35.591956   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:06:35.591983   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:06:35.591992   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:06:35.592058   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:06:35.592140   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:06:35.592158   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:06:35.592173   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:06:35.592219   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:06:35.592280   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:06:35.592296   27287 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:06:35.592303   27287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:06:35.592326   27287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:06:35.592389   27287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617-m03 san=[127.0.0.1 192.168.39.253 ha-764617-m03 localhost minikube]
	I0816 17:06:35.662762   27287 provision.go:177] copyRemoteCerts
	I0816 17:06:35.662814   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:06:35.662835   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.665701   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.666047   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.666075   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.666262   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.666438   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.666551   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.666656   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:35.746127   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:06:35.746201   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:06:35.769929   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:06:35.770012   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 17:06:35.794481   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:06:35.794571   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:06:35.819190   27287 provision.go:87] duration metric: took 233.477927ms to configureAuth
	I0816 17:06:35.819221   27287 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:06:35.819480   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:06:35.819562   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:35.822367   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.822747   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:35.822777   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:35.822929   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:35.823112   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.823256   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:35.823376   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:35.823515   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:35.823729   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:35.823793   27287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:06:36.078096   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:06:36.078133   27287 main.go:141] libmachine: Checking connection to Docker...
	I0816 17:06:36.078144   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetURL
	I0816 17:06:36.079497   27287 main.go:141] libmachine: (ha-764617-m03) DBG | Using libvirt version 6000000
	I0816 17:06:36.081628   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.081985   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.082007   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.082204   27287 main.go:141] libmachine: Docker is up and running!
	I0816 17:06:36.082219   27287 main.go:141] libmachine: Reticulating splines...
	I0816 17:06:36.082227   27287 client.go:171] duration metric: took 24.923714073s to LocalClient.Create
	I0816 17:06:36.082249   27287 start.go:167] duration metric: took 24.923767974s to libmachine.API.Create "ha-764617"
	I0816 17:06:36.082261   27287 start.go:293] postStartSetup for "ha-764617-m03" (driver="kvm2")
	I0816 17:06:36.082274   27287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:06:36.082295   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.082574   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:06:36.082601   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:36.084986   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.085346   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.085376   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.085540   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.085739   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.085901   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.086073   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:36.161891   27287 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:06:36.165990   27287 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:06:36.166014   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:06:36.166084   27287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:06:36.166169   27287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:06:36.166180   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:06:36.166282   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:06:36.174715   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:06:36.197988   27287 start.go:296] duration metric: took 115.714381ms for postStartSetup
	I0816 17:06:36.198032   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetConfigRaw
	I0816 17:06:36.198601   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:36.201489   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.201887   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.201918   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.202168   27287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:06:36.202411   27287 start.go:128] duration metric: took 25.063611638s to createHost
	I0816 17:06:36.202443   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:36.205107   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.205499   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.205524   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.205684   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.205844   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.205988   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.206109   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.206254   27287 main.go:141] libmachine: Using SSH client type: native
	I0816 17:06:36.206419   27287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0816 17:06:36.206432   27287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:06:36.308842   27287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723827996.286893525
	
	I0816 17:06:36.308865   27287 fix.go:216] guest clock: 1723827996.286893525
	I0816 17:06:36.308876   27287 fix.go:229] Guest: 2024-08-16 17:06:36.286893525 +0000 UTC Remote: 2024-08-16 17:06:36.202426568 +0000 UTC m=+145.059887392 (delta=84.466957ms)
	I0816 17:06:36.308895   27287 fix.go:200] guest clock delta is within tolerance: 84.466957ms
	I0816 17:06:36.308902   27287 start.go:83] releasing machines lock for "ha-764617-m03", held for 25.170212902s
	I0816 17:06:36.308924   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.309142   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:36.311958   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.312372   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.312398   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.314684   27287 out.go:177] * Found network options:
	I0816 17:06:36.316208   27287 out.go:177]   - NO_PROXY=192.168.39.18,192.168.39.184
	W0816 17:06:36.317562   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 17:06:36.317581   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:06:36.317592   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.318048   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.318207   27287 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:06:36.318304   27287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:06:36.318340   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	W0816 17:06:36.318416   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 17:06:36.318432   27287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 17:06:36.318484   27287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:06:36.318503   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:06:36.321171   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321384   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321583   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.321607   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321754   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.321868   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:36.321892   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:36.321912   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.322035   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:06:36.322131   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.322226   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:06:36.322296   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:36.322375   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:06:36.322517   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:06:36.549816   27287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:06:36.555487   27287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:06:36.555545   27287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:06:36.573414   27287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 17:06:36.573436   27287 start.go:495] detecting cgroup driver to use...
	I0816 17:06:36.573504   27287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:06:36.590169   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:06:36.603784   27287 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:06:36.603836   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:06:36.617748   27287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:06:36.630805   27287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:06:36.745094   27287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:06:36.898097   27287 docker.go:233] disabling docker service ...
	I0816 17:06:36.898154   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:06:36.911588   27287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:06:36.923400   27287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:06:37.066157   27287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:06:37.185218   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:06:37.199415   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:06:37.218994   27287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:06:37.219059   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.229416   27287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:06:37.229480   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.239655   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.249436   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.259163   27287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:06:37.269306   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.278899   27287 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.295570   27287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:06:37.305152   27287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:06:37.313710   27287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 17:06:37.313760   27287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 17:06:37.326116   27287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:06:37.334896   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:06:37.461973   27287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:06:37.589731   27287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:06:37.589799   27287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:06:37.594349   27287 start.go:563] Will wait 60s for crictl version
	I0816 17:06:37.594404   27287 ssh_runner.go:195] Run: which crictl
	I0816 17:06:37.597876   27287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:06:37.636651   27287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:06:37.636732   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:06:37.663227   27287 ssh_runner.go:195] Run: crio --version
	I0816 17:06:37.691490   27287 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:06:37.693121   27287 out.go:177]   - env NO_PROXY=192.168.39.18
	I0816 17:06:37.694722   27287 out.go:177]   - env NO_PROXY=192.168.39.18,192.168.39.184
	I0816 17:06:37.696038   27287 main.go:141] libmachine: (ha-764617-m03) Calling .GetIP
	I0816 17:06:37.698755   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:37.699119   27287 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:06:37.699145   27287 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:06:37.699374   27287 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:06:37.703276   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:06:37.715076   27287 mustload.go:65] Loading cluster: ha-764617
	I0816 17:06:37.715374   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:06:37.715741   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:37.715787   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:37.731775   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0816 17:06:37.732200   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:37.732774   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:37.732800   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:37.733080   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:37.733298   27287 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:06:37.734638   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:06:37.734910   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:37.734941   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:37.750425   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0816 17:06:37.750936   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:37.751428   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:37.751452   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:37.751774   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:37.751981   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:06:37.752172   27287 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.253
	I0816 17:06:37.752187   27287 certs.go:194] generating shared ca certs ...
	I0816 17:06:37.752205   27287 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:06:37.752349   27287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:06:37.752405   27287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:06:37.752423   27287 certs.go:256] generating profile certs ...
	I0816 17:06:37.752526   27287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:06:37.752567   27287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e
	I0816 17:06:37.752588   27287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.184 192.168.39.253 192.168.39.254]
	I0816 17:06:37.883447   27287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e ...
	I0816 17:06:37.883477   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e: {Name:mke5ffa004a00b8dc15e1b58cef73083e4ecf103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:06:37.883643   27287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e ...
	I0816 17:06:37.883655   27287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e: {Name:mk866f1ba5180fdb0967c8d90670c43aaf810f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:06:37.883723   27287 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.4d0c836e -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:06:37.883852   27287 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.4d0c836e -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:06:37.883992   27287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:06:37.884011   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:06:37.884062   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:06:37.884089   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:06:37.884107   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:06:37.884129   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:06:37.884143   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:06:37.884155   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:06:37.884174   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:06:37.884244   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:06:37.884285   27287 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:06:37.884299   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:06:37.884340   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:06:37.884367   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:06:37.884397   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:06:37.884450   27287 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:06:37.884527   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:06:37.884554   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:06:37.884571   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:06:37.884610   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:06:37.888020   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:37.888397   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:06:37.888411   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:37.888575   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:06:37.888809   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:06:37.888963   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:06:37.889115   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:06:37.968988   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 17:06:37.974680   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 17:06:37.989666   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 17:06:37.994201   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 17:06:38.004434   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 17:06:38.008441   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 17:06:38.021704   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 17:06:38.026123   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 17:06:38.036326   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 17:06:38.040000   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 17:06:38.049939   27287 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 17:06:38.061869   27287 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 17:06:38.072212   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:06:38.096256   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:06:38.120261   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:06:38.143728   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:06:38.166144   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0816 17:06:38.188759   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:06:38.211236   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:06:38.234675   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:06:38.257472   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:06:38.280278   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:06:38.303544   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:06:38.325759   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 17:06:38.340908   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 17:06:38.356140   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 17:06:38.372500   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 17:06:38.389308   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 17:06:38.405213   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 17:06:38.421405   27287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 17:06:38.437521   27287 ssh_runner.go:195] Run: openssl version
	I0816 17:06:38.443055   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:06:38.454048   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:06:38.458400   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:06:38.458446   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:06:38.464066   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:06:38.473681   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:06:38.483895   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:06:38.488554   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:06:38.488609   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:06:38.493699   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 17:06:38.504218   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:06:38.513940   27287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:06:38.518030   27287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:06:38.518087   27287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:06:38.523180   27287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:06:38.533149   27287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:06:38.536987   27287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:06:38.537041   27287 kubeadm.go:934] updating node {m03 192.168.39.253 8443 v1.31.0 crio true true} ...
	I0816 17:06:38.537133   27287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:06:38.537159   27287 kube-vip.go:115] generating kube-vip config ...
	I0816 17:06:38.537199   27287 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:06:38.552759   27287 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:06:38.552834   27287 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 17:06:38.552879   27287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:06:38.561603   27287 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0816 17:06:38.561655   27287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0816 17:06:38.570815   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0816 17:06:38.570847   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0816 17:06:38.570864   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:06:38.570934   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0816 17:06:38.570848   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:06:38.570819   27287 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0816 17:06:38.571031   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0816 17:06:38.571056   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:06:38.578386   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0816 17:06:38.578415   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0816 17:06:38.578432   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0816 17:06:38.578455   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0816 17:06:38.604998   27287 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:06:38.605083   27287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0816 17:06:38.720683   27287 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0816 17:06:38.720727   27287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0816 17:06:39.388735   27287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 17:06:39.399345   27287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0816 17:06:39.416213   27287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:06:39.432846   27287 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 17:06:39.450175   27287 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:06:39.453970   27287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:06:39.466538   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:06:39.608143   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:06:39.626972   27287 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:06:39.627450   27287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:06:39.627509   27287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:06:39.643247   27287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I0816 17:06:39.643826   27287 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:06:39.644414   27287 main.go:141] libmachine: Using API Version  1
	I0816 17:06:39.644442   27287 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:06:39.644838   27287 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:06:39.645034   27287 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:06:39.645198   27287 start.go:317] joinCluster: &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:06:39.645348   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0816 17:06:39.645369   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:06:39.647997   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:39.648435   27287 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:06:39.648465   27287 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:06:39.648656   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:06:39.648836   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:06:39.649006   27287 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:06:39.649157   27287 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:06:39.830980   27287 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:06:39.831031   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ym0jgi.rwnboocl3slfp5fi --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0816 17:07:04.251949   27287 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ym0jgi.rwnboocl3slfp5fi --discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-764617-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (24.420887637s)
	I0816 17:07:04.251982   27287 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0816 17:07:04.730128   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-764617-m03 minikube.k8s.io/updated_at=2024_08_16T17_07_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=ha-764617 minikube.k8s.io/primary=false
	I0816 17:07:04.844238   27287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-764617-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0816 17:07:04.984499   27287 start.go:319] duration metric: took 25.339299308s to joinCluster
	I0816 17:07:04.984574   27287 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:07:04.984929   27287 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:07:04.986175   27287 out.go:177] * Verifying Kubernetes components...
	I0816 17:07:04.987571   27287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:07:05.202133   27287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:07:05.220118   27287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:07:05.220449   27287 kapi.go:59] client config for ha-764617: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.crt", KeyFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key", CAFile:"/home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 17:07:05.220532   27287 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.18:8443
	I0816 17:07:05.220857   27287 node_ready.go:35] waiting up to 6m0s for node "ha-764617-m03" to be "Ready" ...
	I0816 17:07:05.220966   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:05.220977   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:05.220987   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:05.220995   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:05.223989   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:05.721388   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:05.721413   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:05.721421   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:05.721425   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:05.725056   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:06.221188   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:06.221210   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:06.221219   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:06.221226   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:06.224292   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:06.721948   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:06.721976   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:06.721988   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:06.721996   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:06.725251   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:07.221989   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:07.222020   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:07.222034   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:07.222040   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:07.226279   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:07:07.227118   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:07.721116   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:07.721136   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:07.721147   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:07.721153   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:07.724470   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:08.221845   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:08.221867   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:08.221875   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:08.221879   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:08.225220   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:08.721906   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:08.721929   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:08.721936   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:08.721940   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:08.725450   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:09.221819   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:09.221845   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:09.221856   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:09.221864   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:09.224953   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:09.721061   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:09.721080   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:09.721088   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:09.721091   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:09.724384   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:09.724990   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:10.221254   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:10.221281   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:10.221292   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:10.221299   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:10.224947   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:10.721865   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:10.721890   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:10.721906   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:10.721913   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:10.727785   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:07:11.221053   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:11.221074   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:11.221082   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:11.221086   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:11.224379   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:11.721432   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:11.721458   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:11.721467   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:11.721473   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:11.724805   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:11.725248   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:12.221664   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:12.221686   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:12.221696   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:12.221703   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:12.224962   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:12.721626   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:12.721647   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:12.721655   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:12.721660   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:12.725440   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:13.221169   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:13.221190   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:13.221197   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:13.221201   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:13.224223   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:13.721321   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:13.721349   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:13.721360   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:13.721367   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:13.724617   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:14.221636   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:14.221657   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:14.221665   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:14.221668   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:14.224668   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:14.225283   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:14.722017   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:14.722037   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:14.722046   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:14.722049   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:14.725187   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:15.221869   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:15.221890   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:15.221898   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:15.221903   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:15.225400   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:15.721099   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:15.721125   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:15.721133   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:15.721138   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:15.723985   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:16.221487   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:16.221512   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:16.221524   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:16.221529   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:16.225115   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:16.225567   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:16.721129   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:16.721149   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:16.721159   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:16.721167   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:16.724607   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:17.221755   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:17.221777   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:17.221784   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:17.221787   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:17.224849   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:17.721872   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:17.721892   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:17.721899   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:17.721903   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:17.725194   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:18.221870   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:18.221910   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:18.221929   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:18.221936   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:18.225188   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:18.225847   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:18.721895   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:18.721919   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:18.721927   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:18.721932   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:18.725279   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:19.221868   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:19.221888   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:19.221897   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:19.221901   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:19.225584   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:19.721838   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:19.721864   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:19.721877   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:19.721884   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:19.725111   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:20.221883   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:20.221910   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:20.221921   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:20.221925   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:20.225402   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:20.225927   27287 node_ready.go:53] node "ha-764617-m03" has status "Ready":"False"
	I0816 17:07:20.721946   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:20.721973   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:20.721981   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:20.721987   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:20.725470   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:21.221077   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:21.221100   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:21.221111   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:21.221115   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:21.224422   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:21.721833   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:21.721853   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:21.721861   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:21.721865   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:21.725192   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.221865   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:22.221886   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.221894   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.221897   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.225581   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.226057   27287 node_ready.go:49] node "ha-764617-m03" has status "Ready":"True"
	I0816 17:07:22.226073   27287 node_ready.go:38] duration metric: took 17.005191544s for node "ha-764617-m03" to be "Ready" ...
	I0816 17:07:22.226081   27287 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:07:22.226140   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:22.226150   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.226157   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.226161   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.231618   27287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 17:07:22.237858   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.237926   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d6c7g
	I0816 17:07:22.237934   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.237942   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.237946   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.240591   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.241240   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:22.241257   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.241264   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.241267   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.244062   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.244536   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.244551   27287 pod_ready.go:82] duration metric: took 6.674274ms for pod "coredns-6f6b679f8f-d6c7g" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.244559   27287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.244639   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rhb6h
	I0816 17:07:22.244651   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.244659   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.244663   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.247015   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.247522   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:22.247535   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.247542   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.247547   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.250092   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.250520   27287 pod_ready.go:93] pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.250539   27287 pod_ready.go:82] duration metric: took 5.973797ms for pod "coredns-6f6b679f8f-rhb6h" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.250550   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.250600   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617
	I0816 17:07:22.250607   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.250614   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.250618   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.253077   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.253728   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:22.253741   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.253748   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.253751   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.255903   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.256324   27287 pod_ready.go:93] pod "etcd-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.256339   27287 pod_ready.go:82] duration metric: took 5.782852ms for pod "etcd-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.256348   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.256393   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m02
	I0816 17:07:22.256400   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.256406   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.256410   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.258656   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.259145   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:22.259160   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.259167   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.259170   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.261391   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:22.261931   27287 pod_ready.go:93] pod "etcd-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.261952   27287 pod_ready.go:82] duration metric: took 5.594854ms for pod "etcd-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.261963   27287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.422196   27287 request.go:632] Waited for 160.179926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m03
	I0816 17:07:22.422272   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-ha-764617-m03
	I0816 17:07:22.422280   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.422288   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.422294   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.425474   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.622409   27287 request.go:632] Waited for 196.369915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:22.622456   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:22.622462   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.622469   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.622473   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.625652   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:22.626282   27287 pod_ready.go:93] pod "etcd-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:22.626300   27287 pod_ready.go:82] duration metric: took 364.331128ms for pod "etcd-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.626315   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:22.822194   27287 request.go:632] Waited for 195.782729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:07:22.822243   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617
	I0816 17:07:22.822248   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:22.822256   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:22.822260   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:22.825236   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:23.022257   27287 request.go:632] Waited for 196.345945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:23.022327   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:23.022334   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.022342   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.022346   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.025755   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.026223   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:23.026239   27287 pod_ready.go:82] duration metric: took 399.906406ms for pod "kube-apiserver-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.026254   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.222330   27287 request.go:632] Waited for 195.998261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:07:23.222384   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m02
	I0816 17:07:23.222390   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.222397   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.222401   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.225522   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.422660   27287 request.go:632] Waited for 196.348509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:23.422727   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:23.422736   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.422746   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.422755   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.426049   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.426813   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:23.426831   27287 pod_ready.go:82] duration metric: took 400.568472ms for pod "kube-apiserver-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.426843   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.621895   27287 request.go:632] Waited for 194.984593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m03
	I0816 17:07:23.621956   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-764617-m03
	I0816 17:07:23.621963   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.621973   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.621980   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.625094   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:23.822265   27287 request.go:632] Waited for 196.377357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:23.822343   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:23.822351   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:23.822361   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:23.822369   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:23.825375   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:23.826005   27287 pod_ready.go:93] pod "kube-apiserver-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:23.826025   27287 pod_ready.go:82] duration metric: took 399.170806ms for pod "kube-apiserver-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:23.826037   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.022117   27287 request.go:632] Waited for 196.010046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:07:24.022183   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617
	I0816 17:07:24.022189   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.022197   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.022202   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.025588   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.222489   27287 request.go:632] Waited for 196.285418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:24.222554   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:24.222562   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.222572   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.222581   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.226019   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.226722   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:24.226736   27287 pod_ready.go:82] duration metric: took 400.688319ms for pod "kube-controller-manager-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.226746   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.422869   27287 request.go:632] Waited for 196.059866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:07:24.422949   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m02
	I0816 17:07:24.422956   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.422964   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.422971   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.426540   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.622518   27287 request.go:632] Waited for 195.329518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:24.622600   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:24.622607   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.622614   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.622620   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.625819   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:24.626643   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:24.626663   27287 pod_ready.go:82] duration metric: took 399.910627ms for pod "kube-controller-manager-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.626679   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:24.822518   27287 request.go:632] Waited for 195.76611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m03
	I0816 17:07:24.822578   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-764617-m03
	I0816 17:07:24.822583   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:24.822591   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:24.822594   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:24.825956   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.021996   27287 request.go:632] Waited for 195.285348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:25.022054   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:25.022061   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.022071   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.022077   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.026843   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:07:25.027640   27287 pod_ready.go:93] pod "kube-controller-manager-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:25.027667   27287 pod_ready.go:82] duration metric: took 400.978413ms for pod "kube-controller-manager-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.027680   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.222633   27287 request.go:632] Waited for 194.862724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:07:25.222695   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5szr
	I0816 17:07:25.222702   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.222714   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.222719   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.225761   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.421886   27287 request.go:632] Waited for 195.290047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:25.421981   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:25.421996   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.422005   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.422010   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.425343   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.425903   27287 pod_ready.go:93] pod "kube-proxy-g5szr" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:25.425920   27287 pod_ready.go:82] duration metric: took 398.23273ms for pod "kube-proxy-g5szr" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.425928   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.621878   27287 request.go:632] Waited for 195.891243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:07:25.621930   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j75vc
	I0816 17:07:25.621935   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.621943   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.621948   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.625111   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.822018   27287 request.go:632] Waited for 196.305037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:25.822089   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:25.822098   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:25.822107   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:25.822110   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:25.825514   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:25.826094   27287 pod_ready.go:93] pod "kube-proxy-j75vc" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:25.826110   27287 pod_ready.go:82] duration metric: took 400.176235ms for pod "kube-proxy-j75vc" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:25.826119   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mgvzm" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.022246   27287 request.go:632] Waited for 196.048177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgvzm
	I0816 17:07:26.022342   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgvzm
	I0816 17:07:26.022355   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.022365   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.022374   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.026823   27287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 17:07:26.222887   27287 request.go:632] Waited for 195.386671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:26.222940   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:26.222945   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.222952   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.222956   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.226009   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:26.226549   27287 pod_ready.go:93] pod "kube-proxy-mgvzm" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:26.226575   27287 pod_ready.go:82] duration metric: took 400.449646ms for pod "kube-proxy-mgvzm" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.226585   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.421902   27287 request.go:632] Waited for 195.224421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:07:26.421958   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617
	I0816 17:07:26.421963   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.421970   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.421975   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.424870   27287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 17:07:26.622733   27287 request.go:632] Waited for 197.348261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:26.622793   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617
	I0816 17:07:26.622798   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.622806   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.622810   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.626044   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:26.626658   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:26.626674   27287 pod_ready.go:82] duration metric: took 400.082715ms for pod "kube-scheduler-ha-764617" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.626682   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:26.822944   27287 request.go:632] Waited for 196.180078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:07:26.823002   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m02
	I0816 17:07:26.823008   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:26.823017   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:26.823021   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:26.826512   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.022573   27287 request.go:632] Waited for 195.366257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:27.022621   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m02
	I0816 17:07:27.022626   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.022635   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.022646   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.025666   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.026468   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:27.026490   27287 pod_ready.go:82] duration metric: took 399.797902ms for pod "kube-scheduler-ha-764617-m02" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:27.026503   27287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:27.222447   27287 request.go:632] Waited for 195.876859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m03
	I0816 17:07:27.222518   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-764617-m03
	I0816 17:07:27.222526   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.222540   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.222548   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.225901   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.422266   27287 request.go:632] Waited for 195.768523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:27.422360   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/ha-764617-m03
	I0816 17:07:27.422377   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.422385   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.422389   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.425722   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.426363   27287 pod_ready.go:93] pod "kube-scheduler-ha-764617-m03" in "kube-system" namespace has status "Ready":"True"
	I0816 17:07:27.426383   27287 pod_ready.go:82] duration metric: took 399.872152ms for pod "kube-scheduler-ha-764617-m03" in "kube-system" namespace to be "Ready" ...
	I0816 17:07:27.426398   27287 pod_ready.go:39] duration metric: took 5.200306061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:07:27.426414   27287 api_server.go:52] waiting for apiserver process to appear ...
	I0816 17:07:27.426468   27287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:07:27.441460   27287 api_server.go:72] duration metric: took 22.456852586s to wait for apiserver process to appear ...
	I0816 17:07:27.441487   27287 api_server.go:88] waiting for apiserver healthz status ...
	I0816 17:07:27.441509   27287 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0816 17:07:27.449407   27287 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0816 17:07:27.449492   27287 round_trippers.go:463] GET https://192.168.39.18:8443/version
	I0816 17:07:27.449503   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.449517   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.449525   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.450369   27287 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 17:07:27.450438   27287 api_server.go:141] control plane version: v1.31.0
	I0816 17:07:27.450452   27287 api_server.go:131] duration metric: took 8.959106ms to wait for apiserver health ...
	I0816 17:07:27.450460   27287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 17:07:27.622863   27287 request.go:632] Waited for 172.327319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:27.622955   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:27.622962   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.622972   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.622976   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.629756   27287 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 17:07:27.638058   27287 system_pods.go:59] 24 kube-system pods found
	I0816 17:07:27.638092   27287 system_pods.go:61] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:07:27.638100   27287 system_pods.go:61] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:07:27.638105   27287 system_pods.go:61] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:07:27.638110   27287 system_pods.go:61] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:07:27.638115   27287 system_pods.go:61] "etcd-ha-764617-m03" [5149ba57-c3cc-40b3-a502-b782ac9e3124] Running
	I0816 17:07:27.638119   27287 system_pods.go:61] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:07:27.638125   27287 system_pods.go:61] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:07:27.638129   27287 system_pods.go:61] "kindnet-fvp67" [cab5cbb1-9c16-4639-a182-f9dc0b5c674a] Running
	I0816 17:07:27.638134   27287 system_pods.go:61] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:07:27.638146   27287 system_pods.go:61] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:07:27.638155   27287 system_pods.go:61] "kube-apiserver-ha-764617-m03" [390f78be-da45-4134-a1f9-a5605a5f8e4d] Running
	I0816 17:07:27.638161   27287 system_pods.go:61] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:07:27.638168   27287 system_pods.go:61] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:07:27.638174   27287 system_pods.go:61] "kube-controller-manager-ha-764617-m03" [5389ff46-3e33-4d65-b268-e749f05c25a7] Running
	I0816 17:07:27.638182   27287 system_pods.go:61] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:07:27.638188   27287 system_pods.go:61] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:07:27.638196   27287 system_pods.go:61] "kube-proxy-mgvzm" [6c8796c4-3856-4e4c-984f-501bba6459e2] Running
	I0816 17:07:27.638202   27287 system_pods.go:61] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:07:27.638207   27287 system_pods.go:61] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:07:27.638213   27287 system_pods.go:61] "kube-scheduler-ha-764617-m03" [6cc05023-8264-4400-856e-5dbf10494aec] Running
	I0816 17:07:27.638222   27287 system_pods.go:61] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:07:27.638228   27287 system_pods.go:61] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:07:27.638235   27287 system_pods.go:61] "kube-vip-ha-764617-m03" [e1ad6002-e6a5-48ef-976e-1212312bd233] Running
	I0816 17:07:27.638240   27287 system_pods.go:61] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:07:27.638248   27287 system_pods.go:74] duration metric: took 187.778992ms to wait for pod list to return data ...
	I0816 17:07:27.638260   27287 default_sa.go:34] waiting for default service account to be created ...
	I0816 17:07:27.822726   27287 request.go:632] Waited for 184.385657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:07:27.822777   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0816 17:07:27.822783   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:27.822791   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:27.822795   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:27.826494   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:27.826619   27287 default_sa.go:45] found service account: "default"
	I0816 17:07:27.826633   27287 default_sa.go:55] duration metric: took 188.367368ms for default service account to be created ...
	I0816 17:07:27.826642   27287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 17:07:28.021999   27287 request.go:632] Waited for 195.297338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:28.022054   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0816 17:07:28.022059   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:28.022115   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:28.022126   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:28.028302   27287 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 17:07:28.035191   27287 system_pods.go:86] 24 kube-system pods found
	I0816 17:07:28.035214   27287 system_pods.go:89] "coredns-6f6b679f8f-d6c7g" [255004b9-d05e-4686-9e9c-6ec6f7aae439] Running
	I0816 17:07:28.035220   27287 system_pods.go:89] "coredns-6f6b679f8f-rhb6h" [ea20ec0a-a16e-4703-bb54-2e54c31acd40] Running
	I0816 17:07:28.035224   27287 system_pods.go:89] "etcd-ha-764617" [3dcae246-5101-4a41-9f28-a6a1740644d4] Running
	I0816 17:07:28.035228   27287 system_pods.go:89] "etcd-ha-764617-m02" [650d9e63-004f-414b-8a2a-e97bf4d38065] Running
	I0816 17:07:28.035231   27287 system_pods.go:89] "etcd-ha-764617-m03" [5149ba57-c3cc-40b3-a502-b782ac9e3124] Running
	I0816 17:07:28.035234   27287 system_pods.go:89] "kindnet-7l8xt" [ee8130fb-5347-4f22-849f-ebb68e6fc48e] Running
	I0816 17:07:28.035237   27287 system_pods.go:89] "kindnet-94vkj" [a1ce0b8c-c2c8-400a-a013-6eb89e550cd9] Running
	I0816 17:07:28.035240   27287 system_pods.go:89] "kindnet-fvp67" [cab5cbb1-9c16-4639-a182-f9dc0b5c674a] Running
	I0816 17:07:28.035248   27287 system_pods.go:89] "kube-apiserver-ha-764617" [85909d10-ec15-4749-9972-40ededb0e610] Running
	I0816 17:07:28.035251   27287 system_pods.go:89] "kube-apiserver-ha-764617-m02" [adc1ab1c-c514-4e9a-bd9f-4458dbe442b4] Running
	I0816 17:07:28.035254   27287 system_pods.go:89] "kube-apiserver-ha-764617-m03" [390f78be-da45-4134-a1f9-a5605a5f8e4d] Running
	I0816 17:07:28.035262   27287 system_pods.go:89] "kube-controller-manager-ha-764617" [31c5a5d2-e4a5-4405-8f99-f13c12763055] Running
	I0816 17:07:28.035268   27287 system_pods.go:89] "kube-controller-manager-ha-764617-m02" [8d094585-050e-49b9-b2f3-ffa45eadb25b] Running
	I0816 17:07:28.035272   27287 system_pods.go:89] "kube-controller-manager-ha-764617-m03" [5389ff46-3e33-4d65-b268-e749f05c25a7] Running
	I0816 17:07:28.035274   27287 system_pods.go:89] "kube-proxy-g5szr" [6adedbcf-cd3b-4a09-8759-c0e9e4d5ddb5] Running
	I0816 17:07:28.035282   27287 system_pods.go:89] "kube-proxy-j75vc" [50262aeb-9d97-4093-a43f-cb24a5515abb] Running
	I0816 17:07:28.035287   27287 system_pods.go:89] "kube-proxy-mgvzm" [6c8796c4-3856-4e4c-984f-501bba6459e2] Running
	I0816 17:07:28.035290   27287 system_pods.go:89] "kube-scheduler-ha-764617" [4c45b1dc-cc6e-41e2-a059-955fa9fd79aa] Running
	I0816 17:07:28.035293   27287 system_pods.go:89] "kube-scheduler-ha-764617-m02" [bb3e6b70-5a60-49f8-a1c3-08690fda371d] Running
	I0816 17:07:28.035296   27287 system_pods.go:89] "kube-scheduler-ha-764617-m03" [6cc05023-8264-4400-856e-5dbf10494aec] Running
	I0816 17:07:28.035299   27287 system_pods.go:89] "kube-vip-ha-764617" [a30deffd-45c9-4685-ae4c-0c0f113f3bd7] Running
	I0816 17:07:28.035301   27287 system_pods.go:89] "kube-vip-ha-764617-m02" [869da559-ebdf-417f-9494-eb1cacbeab97] Running
	I0816 17:07:28.035304   27287 system_pods.go:89] "kube-vip-ha-764617-m03" [e1ad6002-e6a5-48ef-976e-1212312bd233] Running
	I0816 17:07:28.035307   27287 system_pods.go:89] "storage-provisioner" [15a0a2d4-69d6-4a6b-9199-f8785e015c3b] Running
	I0816 17:07:28.035312   27287 system_pods.go:126] duration metric: took 208.66562ms to wait for k8s-apps to be running ...
	I0816 17:07:28.035321   27287 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 17:07:28.035361   27287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:07:28.048890   27287 system_svc.go:56] duration metric: took 13.562693ms WaitForService to wait for kubelet
	I0816 17:07:28.048913   27287 kubeadm.go:582] duration metric: took 23.064308432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:07:28.048934   27287 node_conditions.go:102] verifying NodePressure condition ...
	I0816 17:07:28.222387   27287 request.go:632] Waited for 173.376848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0816 17:07:28.222479   27287 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0816 17:07:28.222489   27287 round_trippers.go:469] Request Headers:
	I0816 17:07:28.222497   27287 round_trippers.go:473]     Accept: application/json, */*
	I0816 17:07:28.222506   27287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0816 17:07:28.226095   27287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 17:07:28.227070   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:07:28.227090   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:07:28.227100   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:07:28.227104   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:07:28.227107   27287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 17:07:28.227110   27287 node_conditions.go:123] node cpu capacity is 2
	I0816 17:07:28.227114   27287 node_conditions.go:105] duration metric: took 178.175166ms to run NodePressure ...
	I0816 17:07:28.227124   27287 start.go:241] waiting for startup goroutines ...
	I0816 17:07:28.227145   27287 start.go:255] writing updated cluster config ...
	I0816 17:07:28.227412   27287 ssh_runner.go:195] Run: rm -f paused
	I0816 17:07:28.278695   27287 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 17:07:28.281265   27287 out.go:177] * Done! kubectl is now configured to use "ha-764617" cluster and "default" namespace by default
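
The log above is the tail of the start-up wait loop for the ha-764617 cluster: poll the apiserver /healthz endpoint, read /version, then confirm the kube-system pods, the default service account and the kubelet service before printing "Done!". Below is a minimal client-go sketch of that polling pattern. It is illustrative only (assumed default kubeconfig path, no client-side throttling or HA-specific handling) and is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// 1. apiserver health, as in "waiting for apiserver healthz status".
	for i := 0; i < 30; i++ {
		body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err == nil && string(body) == "ok" {
			break
		}
		time.Sleep(time.Second)
	}

	// 2. /version, as in "control plane version: v1.31.0".
	ver, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver.GitVersion)

	// 3. kube-system pods, as in "waiting for k8s-apps to be running".
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Println("not running yet:", p.Name)
		}
	}
}
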
	
	
	==> CRI-O <==
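
The journal entries that follow are CRI-O answering CRI gRPC calls (Version, ImageFsInfo, ListContainers) on its socket. As a rough sketch (not part of this test run), the same three RPCs can be issued directly with the k8s.io/cri-api client; the socket path and the plain local dial are assumptions about a typical CRI-O node.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O listens on its default socket on this node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	ctx := context.Background()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.29.1

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println(f.FsId.Mountpoint, f.UsedBytes.Value, "bytes used")
	}

	// /runtime.v1.RuntimeService/ListContainers with no filter returns the full
	// list, matching the "No filters were applied" debug lines below.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}

On the node, `sudo crictl ps -a` exercises the same ListContainers RPC, which is one way to reproduce a single entry from this log by hand.
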
	Aug 16 17:12:08 ha-764617 crio[676]: time="2024-08-16 17:12:08.973239664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828328973211079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e021b23-87e2-432f-bdd5-31ce075aba47 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:12:08 ha-764617 crio[676]: time="2024-08-16 17:12:08.973813413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bac7a3a-bc7d-40cd-8240-9562ef06181a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:08 ha-764617 crio[676]: time="2024-08-16 17:12:08.973883900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bac7a3a-bc7d-40cd-8240-9562ef06181a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:08 ha-764617 crio[676]: time="2024-08-16 17:12:08.974301209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bac7a3a-bc7d-40cd-8240-9562ef06181a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.015707092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19fbff05-f278-4dcb-8d41-b707838c6405 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.015775380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19fbff05-f278-4dcb-8d41-b707838c6405 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.016904104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77aabd4b-345c-48a3-851c-4102ea650064 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.017456325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828329017426436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77aabd4b-345c-48a3-851c-4102ea650064 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.018356610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97abb9e1-e3e8-43f6-8eaa-5bb8e2955feb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.018412537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97abb9e1-e3e8-43f6-8eaa-5bb8e2955feb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.018657769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97abb9e1-e3e8-43f6-8eaa-5bb8e2955feb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.053881799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb517781-7467-460a-bb5a-f767ae44c280 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.053970663Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb517781-7467-460a-bb5a-f767ae44c280 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.055434705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edc4eafa-8872-45db-ad10-f478e43e78cf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.055909458Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828329055887454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edc4eafa-8872-45db-ad10-f478e43e78cf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.056491869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c93e11f-1aa8-4b65-a222-1855068e3147 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.056561225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c93e11f-1aa8-4b65-a222-1855068e3147 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.057269184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c93e11f-1aa8-4b65-a222-1855068e3147 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.097327146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64dcca6f-2c7a-425c-8d1d-b42929bd3f38 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.097411502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64dcca6f-2c7a-425c-8d1d-b42929bd3f38 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.098646310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4e79c2d-cb0a-4fd3-bf0a-674cc13e6631 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.099065252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828329099043479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4e79c2d-cb0a-4fd3-bf0a-674cc13e6631 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.099581861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3563c865-89af-400d-a306-5bc5bdb1d3a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.099632544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3563c865-89af-400d-a306-5bc5bdb1d3a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:12:09 ha-764617 crio[676]: time="2024-08-16 17:12:09.099866728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828052423881885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b,PodSandboxId:0158b06f966cea3c881bdd10c5c53ac153d60e8f64868f2f1893a602660250cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723827909501655918,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909473257011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723827909453822317,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d0
5e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723827897695724443,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172382789
4189999095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268,PodSandboxId:29c0393581395683e0841872a8b47c31fae1d73c260f1331ec0727d42d4c4898,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172382788525
6090219,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ac04a3c0a524fb49fee0e7201d9eee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723827882764931369,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723827882761194658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183,PodSandboxId:9410cce2ddb5a77033469e2fea5eb8cce49cb54d02df3492ef98005be3b04efe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723827882756386863,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110,PodSandboxId:df0ff04111d0b9081730712d0f7526286300e56603bc40376b44099e52560716,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723827882544331793,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3563c865-89af-400d-a306-5bc5bdb1d3a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f49214f24a1f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   31ad2ee33305c       busybox-7dff88458-rcq66
	7484d3705a58c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   0158b06f966ce       storage-provisioner
	d21ff55e0d154       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   570a9af97580c       coredns-6f6b679f8f-rhb6h
	8eefbb289cdc6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   a96010807e82a       coredns-6f6b679f8f-d6c7g
	b7c860bdbf8f8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   850550a63d423       kindnet-94vkj
	1aaf72ada1592       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   7fa8ce6eea932       kube-proxy-j75vc
	6b4d4cb04162c       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   29c0393581395       kube-vip-ha-764617
	c020d60e48e21       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   c5d6c0455efc0       etcd-ha-764617
	547ba7c3099cf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   09ec8ad12f1f1       kube-scheduler-ha-764617
	0d7b524ef17cf       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   9410cce2ddb5a       kube-controller-manager-ha-764617
	5964f78981ace       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   df0ff04111d0b       kube-apiserver-ha-764617
	
	
	==> coredns [8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf] <==
	[INFO] 10.244.0.4:55343 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117682s
	[INFO] 10.244.0.4:40863 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090711s
	[INFO] 10.244.2.2:52832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114258s
	[INFO] 10.244.2.2:42301 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000139766s
	[INFO] 10.244.1.2:36594 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003847303s
	[INFO] 10.244.1.2:49450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234436s
	[INFO] 10.244.1.2:57236 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164289s
	[INFO] 10.244.1.2:42444 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.01143086s
	[INFO] 10.244.1.2:55740 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014199s
	[INFO] 10.244.0.4:37842 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213176s
	[INFO] 10.244.2.2:33930 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001767359s
	[INFO] 10.244.2.2:58987 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092615s
	[INFO] 10.244.2.2:33562 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210507s
	[INFO] 10.244.1.2:37263 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099612s
	[INFO] 10.244.0.4:45145 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086744s
	[INFO] 10.244.0.4:33500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050948s
	[INFO] 10.244.2.2:35019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095066s
	[INFO] 10.244.2.2:58975 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209149s
	[INFO] 10.244.2.2:53664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077503s
	[INFO] 10.244.1.2:52681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013622s
	[INFO] 10.244.1.2:34428 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179694s
	[INFO] 10.244.1.2:38361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107495s
	[INFO] 10.244.0.4:33031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072835s
	[INFO] 10.244.0.4:46219 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00004433s
	[INFO] 10.244.2.2:36496 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117578s
	
	
	==> coredns [d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5] <==
	[INFO] 10.244.1.2:44737 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149459s
	[INFO] 10.244.0.4:48083 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009692s
	[INFO] 10.244.0.4:46968 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001630816s
	[INFO] 10.244.0.4:57470 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132496s
	[INFO] 10.244.0.4:48384 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001005045s
	[INFO] 10.244.0.4:40408 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078758s
	[INFO] 10.244.0.4:54196 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053068s
	[INFO] 10.244.0.4:58299 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099814s
	[INFO] 10.244.2.2:44737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172429s
	[INFO] 10.244.2.2:44835 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087998s
	[INFO] 10.244.2.2:59750 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001386651s
	[INFO] 10.244.2.2:36531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075822s
	[INFO] 10.244.2.2:33517 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005988s
	[INFO] 10.244.1.2:58731 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174613s
	[INFO] 10.244.1.2:43400 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105057s
	[INFO] 10.244.1.2:41968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104182s
	[INFO] 10.244.0.4:46666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121402s
	[INFO] 10.244.0.4:46004 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066296s
	[INFO] 10.244.2.2:39282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010929s
	[INFO] 10.244.1.2:58290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151089s
	[INFO] 10.244.0.4:38377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152447s
	[INFO] 10.244.0.4:57414 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061601s
	[INFO] 10.244.2.2:49722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182712s
	[INFO] 10.244.2.2:47690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014162s
	[INFO] 10.244.2.2:41318 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108034s
	
	
	==> describe nodes <==
	Name:               ha-764617
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_04_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:07:52 +0000   Fri, 16 Aug 2024 17:05:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-764617
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c56e74c3649b4538acc75a2edf2b5dea
	  System UUID:                c56e74c3-649b-4538-acc7-5a2edf2b5dea
	  Boot ID:                    b56c67cf-18b1-46e0-819e-927538c01209
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rcq66              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-6f6b679f8f-d6c7g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m16s
	  kube-system                 coredns-6f6b679f8f-rhb6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m16s
	  kube-system                 etcd-ha-764617                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m21s
	  kube-system                 kindnet-94vkj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m16s
	  kube-system                 kube-apiserver-ha-764617             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-controller-manager-ha-764617    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-proxy-j75vc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-scheduler-ha-764617             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-vip-ha-764617                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m14s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m28s (x7 over 7m28s)  kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m28s (x8 over 7m28s)  kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s (x8 over 7m28s)  kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m21s                  kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s                  kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m21s                  kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m17s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal  NodeReady                7m1s                   kubelet          Node ha-764617 status is now: NodeReady
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal  RegisteredNode           5m                     node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	
	
	Name:               ha-764617-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_05_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:05:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:08:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 17:07:47 +0000   Fri, 16 Aug 2024 17:09:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-764617-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b9263e99d3f46399a1ef68b5c9541da
	  System UUID:                9b9263e9-9d3f-4639-9a1e-f68b5c9541da
	  Boot ID:                    64559aa2-31fd-4afa-b1e1-b351bc809c37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5kg62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-764617-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-7l8xt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-764617-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-764617-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-g5szr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-764617-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-764617-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node ha-764617-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s (x7 over 6m24s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           5m                     node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  NodeNotReady             2m50s                  node-controller  Node ha-764617-m02 status is now: NodeNotReady
	
	
	Name:               ha-764617-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_07_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:12:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:08:02 +0000   Fri, 16 Aug 2024 17:07:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-764617-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c731249060784cabbf92c847e80f83c3
	  System UUID:                c7312490-6078-4cab-bf92-c847e80f83c3
	  Boot ID:                    af3e1a19-01a5-4968-b106-ed3a1fef8c3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rvd47                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-764617-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-fvp67                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-764617-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-ha-764617-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-mgvzm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-764617-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-764617-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m9s)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m9s)  kubelet          Node ha-764617-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m9s)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal  RegisteredNode           5m                   node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	
	
	Name:               ha-764617-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_08_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:11:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:08:35 +0000   Fri, 16 Aug 2024 17:08:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-764617-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6601760275c145fda2c7de8f57c611fa
	  System UUID:                66017602-75c1-45fd-a2c7-de8f57c611fa
	  Boot ID:                    2537bdd8-4785-401f-91cd-561e77b7360b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-785hx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-proxy-p9gpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m4s)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m4s)  kubelet          Node ha-764617-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m4s)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-764617-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug16 17:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050523] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036974] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680592] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.748851] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.529279] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.494535] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.053885] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056699] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.201898] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.107599] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.255485] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.757333] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.397161] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059974] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.993084] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.077626] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.633091] kauditd_printk_skb: 18 callbacks suppressed
	[Aug16 17:05] kauditd_printk_skb: 41 callbacks suppressed
	[ +41.798128] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f] <==
	{"level":"warn","ts":"2024-08-16T17:12:09.073663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.365564Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.369713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.373694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.379825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.389494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.396518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.402238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.405787Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.413055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.425117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.431661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.434819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.437842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.445315Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.451800Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.455680Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.458919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.460043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.464021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.467229Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.471397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.473656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.478616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:12:09.485104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:12:09 up 7 min,  0 users,  load average: 0.29, 0.46, 0.26
	Linux ha-764617 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24] <==
	I0816 17:11:38.554347       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:11:48.559422       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:11:48.559656       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:11:48.559972       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:11:48.560068       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:11:48.560912       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:11:48.560959       1 main.go:299] handling current node
	I0816 17:11:48.560983       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:11:48.560992       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:11:58.551435       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:11:58.551545       1 main.go:299] handling current node
	I0816 17:11:58.551584       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:11:58.551603       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:11:58.551829       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:11:58.551870       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:11:58.551941       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:11:58.551959       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:12:08.551457       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:12:08.551541       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:12:08.551752       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:12:08.551775       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:12:08.551853       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:12:08.551925       1 main.go:299] handling current node
	I0816 17:12:08.551952       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:12:08.551967       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110] <==
	W0816 17:04:47.326211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18]
	I0816 17:04:47.327200       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:04:47.331997       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:04:47.701125       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:04:48.671616       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:04:48.688479       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0816 17:04:48.696576       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:04:53.201660       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0816 17:04:53.459024       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0816 17:07:33.706199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47396: use of closed network connection
	E0816 17:07:33.897650       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50922: use of closed network connection
	E0816 17:07:34.095470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50948: use of closed network connection
	E0816 17:07:34.279524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50966: use of closed network connection
	E0816 17:07:34.448894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50984: use of closed network connection
	E0816 17:07:34.624762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51016: use of closed network connection
	E0816 17:07:34.807764       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51030: use of closed network connection
	E0816 17:07:34.983617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51036: use of closed network connection
	E0816 17:07:35.162995       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51062: use of closed network connection
	E0816 17:07:35.438882       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51072: use of closed network connection
	E0816 17:07:35.614283       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51098: use of closed network connection
	E0816 17:07:35.789491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51122: use of closed network connection
	E0816 17:07:35.955099       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51152: use of closed network connection
	E0816 17:07:36.141080       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51164: use of closed network connection
	E0816 17:07:36.312313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51186: use of closed network connection
	W0816 17:08:57.342700       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18 192.168.39.253]
	
	
	==> kube-controller-manager [0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183] <==
	I0816 17:08:05.397423       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-764617-m04" podCIDRs=["10.244.3.0/24"]
	I0816 17:08:05.397475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.397557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.408900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.521368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:05.923939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:07.468502       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-764617-m04"
	I0816 17:08:07.537876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:08.309439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:08.353504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:09.380633       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:09.463903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:15.643550       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:26.461323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:26.461538       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-764617-m04"
	I0816 17:08:26.483748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:27.486425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:08:35.906607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:09:19.401331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	I0816 17:09:19.401391       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-764617-m04"
	I0816 17:09:19.423873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	I0816 17:09:19.561938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.322302ms"
	I0816 17:09:19.562198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="171.577µs"
	I0816 17:09:22.527934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	I0816 17:09:24.671322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	
	
	==> kube-proxy [1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:04:54.457594       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:04:54.467881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	E0816 17:04:54.467972       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:04:54.505988       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:04:54.506046       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:04:54.506075       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:04:54.508357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:04:54.508740       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:04:54.508807       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:04:54.510310       1 config.go:197] "Starting service config controller"
	I0816 17:04:54.510367       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:04:54.510427       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:04:54.510443       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:04:54.510949       1 config.go:326] "Starting node config controller"
	I0816 17:04:54.510985       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:04:54.610842       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 17:04:54.610910       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:04:54.611095       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b] <==
	W0816 17:04:46.565034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:04:46.565183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.612257       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 17:04:46.612306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.612647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:04:46.612679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.730255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:04:46.730301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.739812       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:04:46.739857       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 17:04:46.794371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:04:46.794604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.794371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 17:04:46.794716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:04:46.812070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:04:46.812114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 17:04:48.641797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:07:29.208916       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rvd47" node="ha-764617-m03"
	E0816 17:07:29.209097       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" pod="default/busybox-7dff88458-rvd47"
	E0816 17:07:29.210073       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rcq66" node="ha-764617"
	E0816 17:07:29.218500       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" pod="default/busybox-7dff88458-rcq66"
	E0816 17:08:05.463041       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	E0816 17:08:05.468950       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 82c775a8-d580-4201-9da7-790a5a95ef6f(kube-system/kindnet-785hx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-785hx"
	E0816 17:08:05.469002       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" pod="kube-system/kindnet-785hx"
	I0816 17:08:05.469055       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	
	
	==> kubelet <==
	Aug 16 17:10:48 ha-764617 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:10:48 ha-764617 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:10:48 ha-764617 kubelet[1328]: E0816 17:10:48.697973    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828248697601185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:48 ha-764617 kubelet[1328]: E0816 17:10:48.698070    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828248697601185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:58 ha-764617 kubelet[1328]: E0816 17:10:58.699796    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828258699572978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:10:58 ha-764617 kubelet[1328]: E0816 17:10:58.699833    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828258699572978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:08 ha-764617 kubelet[1328]: E0816 17:11:08.701249    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828268700924886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:08 ha-764617 kubelet[1328]: E0816 17:11:08.701288    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828268700924886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:18 ha-764617 kubelet[1328]: E0816 17:11:18.704240    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828278703827755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:18 ha-764617 kubelet[1328]: E0816 17:11:18.704278    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828278703827755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:28 ha-764617 kubelet[1328]: E0816 17:11:28.706007    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828288705622252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:28 ha-764617 kubelet[1328]: E0816 17:11:28.706042    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828288705622252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:38 ha-764617 kubelet[1328]: E0816 17:11:38.709059    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828298708734632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:38 ha-764617 kubelet[1328]: E0816 17:11:38.709403    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828298708734632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:48 ha-764617 kubelet[1328]: E0816 17:11:48.596372    1328 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:11:48 ha-764617 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:11:48 ha-764617 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:11:48 ha-764617 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:11:48 ha-764617 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:11:48 ha-764617 kubelet[1328]: E0816 17:11:48.711286    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828308710973683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:48 ha-764617 kubelet[1328]: E0816 17:11:48.711331    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828308710973683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:58 ha-764617 kubelet[1328]: E0816 17:11:58.713765    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828318713090120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:11:58 ha-764617 kubelet[1328]: E0816 17:11:58.714442    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828318713090120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:12:08 ha-764617 kubelet[1328]: E0816 17:12:08.717720    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828328716974518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:12:08 ha-764617 kubelet[1328]: E0816 17:12:08.718086    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828328716974518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
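(Note: the kindnet and kube-controller-manager entries above report the per-node Pod CIDRs for the secondary nodes (10.244.1.0/24 through 10.244.3.0/24). A quick manual cross-check of those assignments against the API server, a sketch assuming the ha-764617 kubeconfig context is still reachable, would be:

	kubectl --context ha-764617 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

Each node's spec.podCIDR should match the CIDR kindnet logs for it.)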
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-764617 -n ha-764617
helpers_test.go:261: (dbg) Run:  kubectl --context ha-764617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.04s)
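(Note: when RestartSecondaryNode fails like this, a usual manual follow-up, sketched here and not part of the harness, is to confirm whether ha-764617-m02 actually rejoined the control plane after the restart:

	out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
	kubectl --context ha-764617 get nodes -o wide
	kubectl --context ha-764617 -n kube-system get pods -l tier=control-plane -o wide

The pod listing relies on the standard kubeadm static-pod label tier=control-plane; adjust the selector if the control-plane pods are labeled differently.)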

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (398.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-764617 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-764617 -v=7 --alsologtostderr
E0816 17:13:21.061976   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:13:48.765729   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-764617 -v=7 --alsologtostderr: exit status 82 (2m1.778458843s)

                                                
                                                
-- stdout --
	* Stopping node "ha-764617-m04"  ...
	* Stopping node "ha-764617-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:12:10.901749   33094 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:12:10.901986   33094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:12:10.901994   33094 out.go:358] Setting ErrFile to fd 2...
	I0816 17:12:10.901998   33094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:12:10.902155   33094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:12:10.902399   33094 out.go:352] Setting JSON to false
	I0816 17:12:10.902520   33094 mustload.go:65] Loading cluster: ha-764617
	I0816 17:12:10.902891   33094 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:12:10.903019   33094 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:12:10.903230   33094 mustload.go:65] Loading cluster: ha-764617
	I0816 17:12:10.903370   33094 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:12:10.903401   33094 stop.go:39] StopHost: ha-764617-m04
	I0816 17:12:10.903750   33094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:10.903788   33094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:10.919309   33094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0816 17:12:10.919732   33094 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:10.920264   33094 main.go:141] libmachine: Using API Version  1
	I0816 17:12:10.920295   33094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:10.920655   33094 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:10.924143   33094 out.go:177] * Stopping node "ha-764617-m04"  ...
	I0816 17:12:10.925338   33094 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 17:12:10.925377   33094 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:12:10.925590   33094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 17:12:10.925614   33094 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:12:10.928156   33094 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:12:10.928504   33094 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:07:50 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:12:10.928537   33094 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:12:10.928678   33094 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:12:10.928855   33094 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:12:10.928991   33094 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:12:10.929155   33094 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:12:11.011524   33094 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 17:12:11.066165   33094 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 17:12:11.119174   33094 main.go:141] libmachine: Stopping "ha-764617-m04"...
	I0816 17:12:11.119201   33094 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:12:11.121110   33094 main.go:141] libmachine: (ha-764617-m04) Calling .Stop
	I0816 17:12:11.124815   33094 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 0/120
	I0816 17:12:12.218902   33094 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:12:12.219961   33094 main.go:141] libmachine: Machine "ha-764617-m04" was stopped.
	I0816 17:12:12.219974   33094 stop.go:75] duration metric: took 1.294647563s to stop
	I0816 17:12:12.219990   33094 stop.go:39] StopHost: ha-764617-m03
	I0816 17:12:12.220259   33094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:12:12.220296   33094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:12:12.234934   33094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0816 17:12:12.235293   33094 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:12:12.235749   33094 main.go:141] libmachine: Using API Version  1
	I0816 17:12:12.235768   33094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:12:12.236049   33094 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:12:12.237921   33094 out.go:177] * Stopping node "ha-764617-m03"  ...
	I0816 17:12:12.239054   33094 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 17:12:12.239079   33094 main.go:141] libmachine: (ha-764617-m03) Calling .DriverName
	I0816 17:12:12.239309   33094 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 17:12:12.239335   33094 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHHostname
	I0816 17:12:12.242067   33094 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:12:12.242448   33094 main.go:141] libmachine: (ha-764617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4e:81", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:06:25 +0000 UTC Type:0 Mac:52:54:00:b2:4e:81 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-764617-m03 Clientid:01:52:54:00:b2:4e:81}
	I0816 17:12:12.242488   33094 main.go:141] libmachine: (ha-764617-m03) DBG | domain ha-764617-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:b2:4e:81 in network mk-ha-764617
	I0816 17:12:12.242610   33094 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHPort
	I0816 17:12:12.242762   33094 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHKeyPath
	I0816 17:12:12.242883   33094 main.go:141] libmachine: (ha-764617-m03) Calling .GetSSHUsername
	I0816 17:12:12.243014   33094 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m03/id_rsa Username:docker}
	I0816 17:12:12.322437   33094 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 17:12:12.374313   33094 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 17:12:12.430958   33094 main.go:141] libmachine: Stopping "ha-764617-m03"...
	I0816 17:12:12.430980   33094 main.go:141] libmachine: (ha-764617-m03) Calling .GetState
	I0816 17:12:12.432409   33094 main.go:141] libmachine: (ha-764617-m03) Calling .Stop
	I0816 17:12:12.435580   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 0/120
	I0816 17:12:13.437017   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 1/120
	I0816 17:12:14.439058   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 2/120
	I0816 17:12:15.440482   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 3/120
	I0816 17:12:16.441882   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 4/120
	I0816 17:12:17.444416   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 5/120
	I0816 17:12:18.446750   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 6/120
	I0816 17:12:19.448439   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 7/120
	I0816 17:12:20.449734   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 8/120
	I0816 17:12:21.451237   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 9/120
	I0816 17:12:22.453158   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 10/120
	I0816 17:12:23.454670   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 11/120
	I0816 17:12:24.456347   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 12/120
	I0816 17:12:25.458115   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 13/120
	I0816 17:12:26.459560   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 14/120
	I0816 17:12:27.461723   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 15/120
	I0816 17:12:28.463187   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 16/120
	I0816 17:12:29.464937   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 17/120
	I0816 17:12:30.466460   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 18/120
	I0816 17:12:31.468267   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 19/120
	I0816 17:12:32.470437   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 20/120
	I0816 17:12:33.472602   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 21/120
	I0816 17:12:34.474260   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 22/120
	I0816 17:12:35.475845   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 23/120
	I0816 17:12:36.477845   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 24/120
	I0816 17:12:37.480310   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 25/120
	I0816 17:12:38.481891   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 26/120
	I0816 17:12:39.483772   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 27/120
	I0816 17:12:40.485306   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 28/120
	I0816 17:12:41.487343   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 29/120
	I0816 17:12:42.490204   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 30/120
	I0816 17:12:43.491742   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 31/120
	I0816 17:12:44.493449   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 32/120
	I0816 17:12:45.494965   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 33/120
	I0816 17:12:46.496393   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 34/120
	I0816 17:12:47.498209   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 35/120
	I0816 17:12:48.499544   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 36/120
	I0816 17:12:49.501047   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 37/120
	I0816 17:12:50.502933   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 38/120
	I0816 17:12:51.504172   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 39/120
	I0816 17:12:52.506136   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 40/120
	I0816 17:12:53.507717   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 41/120
	I0816 17:12:54.509232   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 42/120
	I0816 17:12:55.510435   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 43/120
	I0816 17:12:56.511831   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 44/120
	I0816 17:12:57.513350   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 45/120
	I0816 17:12:58.515304   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 46/120
	I0816 17:12:59.516819   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 47/120
	I0816 17:13:00.519232   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 48/120
	I0816 17:13:01.520920   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 49/120
	I0816 17:13:02.522948   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 50/120
	I0816 17:13:03.524431   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 51/120
	I0816 17:13:04.526117   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 52/120
	I0816 17:13:05.527517   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 53/120
	I0816 17:13:06.529001   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 54/120
	I0816 17:13:07.530817   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 55/120
	I0816 17:13:08.532366   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 56/120
	I0816 17:13:09.534079   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 57/120
	I0816 17:13:10.535714   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 58/120
	I0816 17:13:11.537254   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 59/120
	I0816 17:13:12.539052   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 60/120
	I0816 17:13:13.540453   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 61/120
	I0816 17:13:14.541933   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 62/120
	I0816 17:13:15.543442   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 63/120
	I0816 17:13:16.544665   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 64/120
	I0816 17:13:17.546515   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 65/120
	I0816 17:13:18.547867   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 66/120
	I0816 17:13:19.549364   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 67/120
	I0816 17:13:20.551076   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 68/120
	I0816 17:13:21.552546   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 69/120
	I0816 17:13:22.554713   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 70/120
	I0816 17:13:23.556148   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 71/120
	I0816 17:13:24.558519   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 72/120
	I0816 17:13:25.560098   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 73/120
	I0816 17:13:26.561778   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 74/120
	I0816 17:13:27.563679   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 75/120
	I0816 17:13:28.565082   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 76/120
	I0816 17:13:29.566529   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 77/120
	I0816 17:13:30.567907   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 78/120
	I0816 17:13:31.569593   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 79/120
	I0816 17:13:32.571548   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 80/120
	I0816 17:13:33.573111   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 81/120
	I0816 17:13:34.574985   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 82/120
	I0816 17:13:35.576561   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 83/120
	I0816 17:13:36.577899   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 84/120
	I0816 17:13:37.579458   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 85/120
	I0816 17:13:38.580709   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 86/120
	I0816 17:13:39.582166   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 87/120
	I0816 17:13:40.583512   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 88/120
	I0816 17:13:41.585217   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 89/120
	I0816 17:13:42.587016   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 90/120
	I0816 17:13:43.588401   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 91/120
	I0816 17:13:44.589743   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 92/120
	I0816 17:13:45.591705   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 93/120
	I0816 17:13:46.593121   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 94/120
	I0816 17:13:47.594866   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 95/120
	I0816 17:13:48.596238   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 96/120
	I0816 17:13:49.597740   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 97/120
	I0816 17:13:50.599195   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 98/120
	I0816 17:13:51.600476   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 99/120
	I0816 17:13:52.602595   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 100/120
	I0816 17:13:53.603938   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 101/120
	I0816 17:13:54.605363   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 102/120
	I0816 17:13:55.607051   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 103/120
	I0816 17:13:56.608335   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 104/120
	I0816 17:13:57.609977   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 105/120
	I0816 17:13:58.611108   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 106/120
	I0816 17:13:59.612368   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 107/120
	I0816 17:14:00.613666   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 108/120
	I0816 17:14:01.615500   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 109/120
	I0816 17:14:02.617317   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 110/120
	I0816 17:14:03.618746   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 111/120
	I0816 17:14:04.620359   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 112/120
	I0816 17:14:05.622072   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 113/120
	I0816 17:14:06.623335   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 114/120
	I0816 17:14:07.624991   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 115/120
	I0816 17:14:08.626395   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 116/120
	I0816 17:14:09.627951   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 117/120
	I0816 17:14:10.629365   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 118/120
	I0816 17:14:11.631090   33094 main.go:141] libmachine: (ha-764617-m03) Waiting for machine to stop 119/120
	I0816 17:14:12.632063   33094 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 17:14:12.632132   33094 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 17:14:12.634044   33094 out.go:201] 
	W0816 17:14:12.635418   33094 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 17:14:12.635438   33094 out.go:270] * 
	* 
	W0816 17:14:12.637665   33094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 17:14:12.639052   33094 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-764617 -v=7 --alsologtostderr" : exit status 82
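(Note: the args string quoted in the message above refers to the earlier `node list` invocation; the command that actually returned exit status 82 is the `minikube stop` run shown before it, which exited with GUEST_STOP_TIMEOUT because ha-764617-m03 never left the "Running" state within the 120-iteration wait. A manual way to inspect and recover the stuck domain on the KVM host, a sketch assuming the kvm2 driver's default qemu:///system libvirt connection, would be:

	virsh -c qemu:///system list --all
	virsh -c qemu:///system destroy ha-764617-m03              # hard power-off if graceful shutdown hangs
	out/minikube-linux-amd64 -p ha-764617 logs --file=logs.txt # collect logs as the error box suggests
)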
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-764617 --wait=true -v=7 --alsologtostderr
E0816 17:16:12.269735   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:17:35.334731   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:18:21.062147   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-764617 --wait=true -v=7 --alsologtostderr: (4m33.441730719s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-764617
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-764617 -n ha-764617
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-764617 logs -n 25: (2.067617965s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m02:/home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m04 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp testdata/cp-test.txt                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617:/home/docker/cp-test_ha-764617-m04_ha-764617.txt                       |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617 sudo cat                                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617.txt                                 |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m02:/home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03:/home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m03 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-764617 node stop m02 -v=7                                                     | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-764617 node start m02 -v=7                                                    | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-764617 -v=7                                                           | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-764617 -v=7                                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-764617 --wait=true -v=7                                                    | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:14 UTC | 16 Aug 24 17:18 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-764617                                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:18 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:14:12.681840   33567 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:14:12.682169   33567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:14:12.682183   33567 out.go:358] Setting ErrFile to fd 2...
	I0816 17:14:12.682190   33567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:14:12.682629   33567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:14:12.683709   33567 out.go:352] Setting JSON to false
	I0816 17:14:12.684865   33567 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3351,"bootTime":1723825102,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:14:12.684953   33567 start.go:139] virtualization: kvm guest
	I0816 17:14:12.687095   33567 out.go:177] * [ha-764617] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:14:12.688589   33567 notify.go:220] Checking for updates...
	I0816 17:14:12.688598   33567 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:14:12.690514   33567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:14:12.692050   33567 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:14:12.693171   33567 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:14:12.694641   33567 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:14:12.695807   33567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:14:12.697475   33567 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:14:12.697612   33567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:14:12.698197   33567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:14:12.698255   33567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:14:12.714606   33567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I0816 17:14:12.714976   33567 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:14:12.715566   33567 main.go:141] libmachine: Using API Version  1
	I0816 17:14:12.715594   33567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:14:12.715960   33567 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:14:12.716152   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:14:12.753582   33567 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 17:14:12.754920   33567 start.go:297] selected driver: kvm2
	I0816 17:14:12.754940   33567 start.go:901] validating driver "kvm2" against &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:14:12.755112   33567 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:14:12.755482   33567 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:14:12.755569   33567 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 17:14:12.770376   33567 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 17:14:12.771018   33567 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:14:12.771081   33567 cni.go:84] Creating CNI manager for ""
	I0816 17:14:12.771092   33567 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 17:14:12.771145   33567 start.go:340] cluster config:
	{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:14:12.771292   33567 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:14:12.773256   33567 out.go:177] * Starting "ha-764617" primary control-plane node in "ha-764617" cluster
	I0816 17:14:12.774450   33567 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:14:12.774482   33567 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 17:14:12.774489   33567 cache.go:56] Caching tarball of preloaded images
	I0816 17:14:12.774586   33567 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:14:12.774602   33567 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:14:12.774720   33567 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:14:12.774909   33567 start.go:360] acquireMachinesLock for ha-764617: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:14:12.774960   33567 start.go:364] duration metric: took 31.964µs to acquireMachinesLock for "ha-764617"
	I0816 17:14:12.774980   33567 start.go:96] Skipping create...Using existing machine configuration
	I0816 17:14:12.774992   33567 fix.go:54] fixHost starting: 
	I0816 17:14:12.775286   33567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:14:12.775319   33567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:14:12.789423   33567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0816 17:14:12.789796   33567 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:14:12.790222   33567 main.go:141] libmachine: Using API Version  1
	I0816 17:14:12.790241   33567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:14:12.790643   33567 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:14:12.790973   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:14:12.791151   33567 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:14:12.792756   33567 fix.go:112] recreateIfNeeded on ha-764617: state=Running err=<nil>
	W0816 17:14:12.792794   33567 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 17:14:12.794795   33567 out.go:177] * Updating the running kvm2 "ha-764617" VM ...
	I0816 17:14:12.796080   33567 machine.go:93] provisionDockerMachine start ...
	I0816 17:14:12.796098   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:14:12.796360   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:12.798979   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.799400   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:12.799426   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.799569   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:12.799739   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.799891   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.800061   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:12.800227   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:12.800431   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:12.800442   33567 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 17:14:12.922160   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617
	
	I0816 17:14:12.922188   33567 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:14:12.922481   33567 buildroot.go:166] provisioning hostname "ha-764617"
	I0816 17:14:12.922504   33567 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:14:12.922744   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:12.925190   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.925619   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:12.925646   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.925802   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:12.925995   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.926141   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.926315   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:12.926487   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:12.926664   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:12.926679   33567 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617 && echo "ha-764617" | sudo tee /etc/hostname
	I0816 17:14:13.056269   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617
	
	I0816 17:14:13.056301   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.058990   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.059445   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.059475   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.059641   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:13.059823   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.059993   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.060114   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:13.060246   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:13.060438   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:13.060460   33567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:14:13.173193   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:14:13.173228   33567 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:14:13.173256   33567 buildroot.go:174] setting up certificates
	I0816 17:14:13.173269   33567 provision.go:84] configureAuth start
	I0816 17:14:13.173283   33567 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:14:13.173577   33567 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:14:13.176292   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.176679   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.176707   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.176853   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.179121   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.179415   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.179445   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.179572   33567 provision.go:143] copyHostCerts
	I0816 17:14:13.179621   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:14:13.179657   33567 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:14:13.179666   33567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:14:13.179739   33567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:14:13.179818   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:14:13.179837   33567 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:14:13.179841   33567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:14:13.179871   33567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:14:13.179910   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:14:13.179932   33567 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:14:13.179937   33567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:14:13.179963   33567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:14:13.180006   33567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617 san=[127.0.0.1 192.168.39.18 ha-764617 localhost minikube]
	I0816 17:14:13.268473   33567 provision.go:177] copyRemoteCerts
	I0816 17:14:13.268524   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:14:13.268546   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.271093   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.271435   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.271462   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.271666   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:13.271858   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.272012   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:13.272165   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:14:13.358638   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:14:13.358700   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:14:13.382393   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:14:13.382485   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 17:14:13.406227   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:14:13.406314   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:14:13.430275   33567 provision.go:87] duration metric: took 256.992665ms to configureAuth
	I0816 17:14:13.430301   33567 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:14:13.430564   33567 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:14:13.430639   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.432992   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.433404   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.433432   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.433526   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:13.433697   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.433895   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.434031   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:13.434190   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:13.434412   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:13.434427   33567 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:15:44.383680   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:15:44.383706   33567 machine.go:96] duration metric: took 1m31.587612978s to provisionDockerMachine
	I0816 17:15:44.383720   33567 start.go:293] postStartSetup for "ha-764617" (driver="kvm2")
	I0816 17:15:44.383733   33567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:15:44.383752   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.384099   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:15:44.384123   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.386974   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.387470   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.387493   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.387637   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.387835   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.387994   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.388127   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:15:44.475562   33567 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:15:44.479566   33567 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:15:44.479601   33567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:15:44.479676   33567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:15:44.479762   33567 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:15:44.479780   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:15:44.479864   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:15:44.488385   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:15:44.510271   33567 start.go:296] duration metric: took 126.538123ms for postStartSetup
	I0816 17:15:44.510330   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.510622   33567 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 17:15:44.510646   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.513338   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.513747   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.513769   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.513920   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.514113   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.514248   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.514435   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	W0816 17:15:44.598247   33567 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0816 17:15:44.598274   33567 fix.go:56] duration metric: took 1m31.82328506s for fixHost
	I0816 17:15:44.598294   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.601014   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.601372   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.601401   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.601597   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.601802   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.601972   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.602067   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.602209   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:15:44.602436   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:15:44.602455   33567 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:15:44.717188   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723828544.671591292
	
	I0816 17:15:44.717211   33567 fix.go:216] guest clock: 1723828544.671591292
	I0816 17:15:44.717218   33567 fix.go:229] Guest: 2024-08-16 17:15:44.671591292 +0000 UTC Remote: 2024-08-16 17:15:44.59828124 +0000 UTC m=+91.949787318 (delta=73.310052ms)
	I0816 17:15:44.717246   33567 fix.go:200] guest clock delta is within tolerance: 73.310052ms
	I0816 17:15:44.717251   33567 start.go:83] releasing machines lock for "ha-764617", held for 1m31.942283255s
	I0816 17:15:44.717272   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.717538   33567 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:15:44.720100   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.720508   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.720531   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.720714   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.721187   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.721359   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.721455   33567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:15:44.721501   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.721547   33567 ssh_runner.go:195] Run: cat /version.json
	I0816 17:15:44.721566   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.724022   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724369   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724448   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.724472   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724583   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.724761   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.724903   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.724922   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724935   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.725034   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.725112   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:15:44.725192   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.725317   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.725453   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:15:44.805301   33567 ssh_runner.go:195] Run: systemctl --version
	I0816 17:15:44.845850   33567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:15:45.004125   33567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:15:45.012739   33567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:15:45.012813   33567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:15:45.021271   33567 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 17:15:45.021291   33567 start.go:495] detecting cgroup driver to use...
	I0816 17:15:45.021394   33567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:15:45.036322   33567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:15:45.050097   33567 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:15:45.050155   33567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:15:45.064096   33567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:15:45.077640   33567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:15:45.230350   33567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:15:45.374034   33567 docker.go:233] disabling docker service ...
	I0816 17:15:45.374106   33567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:15:45.392104   33567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:15:45.405018   33567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:15:45.546831   33567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:15:45.686710   33567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:15:45.700826   33567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:15:45.719391   33567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:15:45.719449   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.728931   33567 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:15:45.728996   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.738455   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.747724   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.757078   33567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:15:45.766525   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.775796   33567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.787326   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.797235   33567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:15:45.806123   33567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:15:45.814653   33567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:15:45.951155   33567 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:15:48.920448   33567 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.969262566s)
	I0816 17:15:48.920482   33567 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:15:48.920533   33567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:15:48.929517   33567 start.go:563] Will wait 60s for crictl version
	I0816 17:15:48.929606   33567 ssh_runner.go:195] Run: which crictl
	I0816 17:15:48.933325   33567 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:15:48.967726   33567 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:15:48.967829   33567 ssh_runner.go:195] Run: crio --version
	I0816 17:15:48.995025   33567 ssh_runner.go:195] Run: crio --version
	I0816 17:15:49.024017   33567 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:15:49.025551   33567 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:15:49.028362   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:49.028732   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:49.028769   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:49.029002   33567 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:15:49.033556   33567 kubeadm.go:883] updating cluster {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:15:49.033697   33567 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:15:49.033755   33567 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:15:49.076085   33567 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:15:49.076105   33567 crio.go:433] Images already preloaded, skipping extraction
	I0816 17:15:49.076162   33567 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:15:49.109504   33567 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:15:49.109522   33567 cache_images.go:84] Images are preloaded, skipping loading
	I0816 17:15:49.109530   33567 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.31.0 crio true true} ...
	I0816 17:15:49.109670   33567 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:15:49.109753   33567 ssh_runner.go:195] Run: crio config
	I0816 17:15:49.157459   33567 cni.go:84] Creating CNI manager for ""
	I0816 17:15:49.157484   33567 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 17:15:49.157493   33567 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:15:49.157519   33567 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-764617 NodeName:ha-764617 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:15:49.157685   33567 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-764617"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
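The generated kubeadm config above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch that splits such a file on document separators and reports each document's kind, assuming a local copy named kubeadm.yaml:

// kubeadm_kinds.go: minimal sketch that splits a multi-document kubeadm
// config (like the one logged above) and prints each document's kind.
// The file path is an assumption for illustration.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // assumed local copy of the generated config
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}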
	
	I0816 17:15:49.157714   33567 kube-vip.go:115] generating kube-vip config ...
	I0816 17:15:49.157753   33567 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:15:49.168781   33567 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:15:49.168904   33567 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
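kube-vip runs as a static pod on each control plane and, with cp_enable and lb_enable set, advertises the HA virtual IP (192.168.39.254 above) and load-balances the API server port. A minimal, hypothetical probe that checks whether that VIP is accepting TCP connections on port 8443; it is not part of minikube, just an illustration:

// vip_probe.go: minimal sketch of checking whether the kube-vip HA VIP
// answers on the API server port. Address and port are taken from the
// config above; the probe itself is an illustrative assumption.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP is accepting TCP connections on", addr)
}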
	I0816 17:15:49.168961   33567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:15:49.178411   33567 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:15:49.178478   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 17:15:49.187170   33567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0816 17:15:49.203351   33567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:15:49.218712   33567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0816 17:15:49.233914   33567 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 17:15:49.251037   33567 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:15:49.254744   33567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:15:49.393613   33567 ssh_runner.go:195] Run: sudo systemctl start kubelet
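The grep above verifies that control-plane.minikube.internal resolves to the VIP via /etc/hosts before the kubelet is restarted. A minimal sketch of an ensure-entry helper; the append-if-missing behavior is an assumption for illustration, as the log itself only shows the check:

// ensure_hosts.go: minimal sketch of ensuring an /etc/hosts entry for
// control-plane.minikube.internal. The append-if-missing behavior is an
// assumption for illustration; the log above only shows the grep check.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), "control-plane.minikube.internal") {
		fmt.Println("entry already present")
		return
	}
	f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Fprintln(f, entry)
}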
	I0816 17:15:49.407763   33567 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.18
	I0816 17:15:49.407794   33567 certs.go:194] generating shared ca certs ...
	I0816 17:15:49.407812   33567 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:15:49.407979   33567 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:15:49.408050   33567 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:15:49.408069   33567 certs.go:256] generating profile certs ...
	I0816 17:15:49.408191   33567 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:15:49.408231   33567 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208
	I0816 17:15:49.408265   33567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.184 192.168.39.253 192.168.39.254]
	I0816 17:15:49.529281   33567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208 ...
	I0816 17:15:49.529313   33567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208: {Name:mkba387e9626a8467f3548bc2879abbf94f19965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:15:49.529491   33567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208 ...
	I0816 17:15:49.529505   33567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208: {Name:mkacc5f31f268458dfb07a0a1f8c85e5d2963b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:15:49.529587   33567 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:15:49.529778   33567 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:15:49.529920   33567 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:15:49.529936   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:15:49.529950   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:15:49.529966   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:15:49.529982   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:15:49.529997   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:15:49.530011   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:15:49.530029   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:15:49.530043   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:15:49.530101   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:15:49.530131   33567 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:15:49.530154   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:15:49.530184   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:15:49.530215   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:15:49.530240   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:15:49.530337   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:15:49.530376   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.530392   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.530407   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.531417   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:15:49.556111   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:15:49.577933   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:15:49.601123   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:15:49.623704   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 17:15:49.645796   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:15:49.668042   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:15:49.691469   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:15:49.713630   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:15:49.736194   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:15:49.759002   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:15:49.781331   33567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:15:49.796401   33567 ssh_runner.go:195] Run: openssl version
	I0816 17:15:49.801725   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:15:49.811298   33567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.815214   33567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.815256   33567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.820340   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:15:49.828902   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:15:49.838333   33567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.842205   33567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.842259   33567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.847306   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:15:49.855687   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:15:49.865422   33567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.869385   33567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.869419   33567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.874514   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
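The test/ln/openssl sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0) so that TLS clients on the node trust it. A minimal sketch of the same idea driven from Go; the paths match the log, the exact link layout is simplified, and running it requires root on the node:

// install_ca.go: minimal sketch of the CA-install sequence logged above:
// compute the certificate's OpenSSL subject hash and create the <hash>.0
// symlink in /etc/ssl/certs. Paths come from the log; this is a sketch,
// not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash used by the
	// /etc/ssl/certs/<hash>.0 lookup scheme.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any existing link, mirroring `ln -fs`.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("installed", cert, "as", link)
}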
	I0816 17:15:49.882797   33567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:15:49.886837   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 17:15:49.899849   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 17:15:49.906766   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 17:15:49.916591   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 17:15:49.928299   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 17:15:49.936397   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
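Each `openssl x509 -checkend 86400` call above asserts that the certificate is still valid 24 hours from now. A minimal native equivalent using crypto/x509 is sketched below; the path is one of the certs checked in the log and reading it requires root on the node:

// checkend.go: minimal Go equivalent of `openssl x509 -checkend 86400`:
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}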
	I0816 17:15:49.942566   33567 kubeadm.go:392] StartCluster: {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:15:49.942691   33567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:15:49.942732   33567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:15:50.012616   33567 cri.go:89] found id: "78a2c078c9a836f3c3f3792f4888cf462cc0115bbd832fb0b3fe6afcea71593b"
	I0816 17:15:50.012666   33567 cri.go:89] found id: "49173ab56bb476ad0e5e598050b2d6cdf03bad18ffd952c9fc5a040efba23313"
	I0816 17:15:50.012671   33567 cri.go:89] found id: "a13c43bf5322cc3c68429cd57b4f2b0cd808310cbf83a054c8f8ceac9247fdc9"
	I0816 17:15:50.012674   33567 cri.go:89] found id: "7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b"
	I0816 17:15:50.012676   33567 cri.go:89] found id: "d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5"
	I0816 17:15:50.012680   33567 cri.go:89] found id: "8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf"
	I0816 17:15:50.012682   33567 cri.go:89] found id: "b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24"
	I0816 17:15:50.012685   33567 cri.go:89] found id: "1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d"
	I0816 17:15:50.012687   33567 cri.go:89] found id: "6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268"
	I0816 17:15:50.012694   33567 cri.go:89] found id: "c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f"
	I0816 17:15:50.012696   33567 cri.go:89] found id: "547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b"
	I0816 17:15:50.012711   33567 cri.go:89] found id: "0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183"
	I0816 17:15:50.012714   33567 cri.go:89] found id: "5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110"
	I0816 17:15:50.012716   33567 cri.go:89] found id: ""
	I0816 17:15:50.012759   33567 ssh_runner.go:195] Run: sudo runc list -f json
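The crictl invocation above prints one container ID per line, which produces the `found id:` entries. A minimal sketch of the same listing driven from Go; it assumes crictl and the CRI socket are available on the node (typically root access is required):

// list_kube_system.go: minimal sketch of listing kube-system container IDs
// the way the log above does, by shelling out to crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}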
	
	
	==> CRI-O <==
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.853381294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828726853357554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a3843d5-0933-479d-9029-76cc674a5195 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.853867959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b690148-009a-4627-aa04-e186166e18cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.853925865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b690148-009a-4627-aa04-e186166e18cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.854735275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b690148-009a-4627-aa04-e186166e18cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.899243337Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5cfc2d8-5e9a-4e4e-bf9c-7f2772c9bab4 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.899317303Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5cfc2d8-5e9a-4e4e-bf9c-7f2772c9bab4 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.900569563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df37b6cb-782c-491c-8d20-17b65f04ef70 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.901028518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828726901003388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df37b6cb-782c-491c-8d20-17b65f04ef70 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.901484940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f22f452-c021-4b57-a92d-3dacc0eff3d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.901538370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f22f452-c021-4b57-a92d-3dacc0eff3d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.901938606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f22f452-c021-4b57-a92d-3dacc0eff3d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.943495935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cc64942-d473-4d77-a188-16ef56d5834a name=/runtime.v1.RuntimeService/Version
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.943584226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cc64942-d473-4d77-a188-16ef56d5834a name=/runtime.v1.RuntimeService/Version
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.944816043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2867b0f5-542a-4be8-949c-f8b9399d9eea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.945313341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828726945291050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2867b0f5-542a-4be8-949c-f8b9399d9eea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.945914688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed860b04-0c56-400a-a9b7-cfb17f18dff9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.945980253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed860b04-0c56-400a-a9b7-cfb17f18dff9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.946421791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed860b04-0c56-400a-a9b7-cfb17f18dff9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.998543884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b8d45f8-f612-433b-99fe-2b193c17bcd9 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:18:46 ha-764617 crio[3620]: time="2024-08-16 17:18:46.998627470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b8d45f8-f612-433b-99fe-2b193c17bcd9 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:18:47 ha-764617 crio[3620]: time="2024-08-16 17:18:46.999939516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b6f4d96-5da6-4853-9f49-649ff1f96ef0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:18:47 ha-764617 crio[3620]: time="2024-08-16 17:18:47.000650044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828727000613656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b6f4d96-5da6-4853-9f49-649ff1f96ef0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:18:47 ha-764617 crio[3620]: time="2024-08-16 17:18:47.001605755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f193b1ca-ed65-4a1b-b7df-19c46f863111 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:47 ha-764617 crio[3620]: time="2024-08-16 17:18:47.001836718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f193b1ca-ed65-4a1b-b7df-19c46f863111 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:18:47 ha-764617 crio[3620]: time="2024-08-16 17:18:47.002713464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f193b1ca-ed65-4a1b-b7df-19c46f863111 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ac791931e4d13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   651fc3ebab41e       storage-provisioner
	dac1a1703e40d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Running             kube-controller-manager   2                   ea534f2443e4a       kube-controller-manager-ha-764617
	f4f92a8547dae       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   a58c1ee703662       kube-apiserver-ha-764617
	d42c6410ce9e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   651fc3ebab41e       storage-provisioner
	e2a44b535a74e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   872ace8403f25       busybox-7dff88458-rcq66
	19098e8597720       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   fa54cec332137       kube-vip-ha-764617
	f97fbca9aeaa7       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   ac31589ee8673       kube-proxy-j75vc
	417237a361000       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   7ff7258bc28f6       coredns-6f6b679f8f-rhb6h
	030986f0ddc53       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   31b42c4c94b6f       etcd-ha-764617
	8281ecd4daddf       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   8667e68ae5f7b       kindnet-94vkj
	290a055bbcb3d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   ea534f2443e4a       kube-controller-manager-ha-764617
	aeafb73cca635       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   080ca9d7f3bf7       kube-scheduler-ha-764617
	40117a7184f25       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   a58c1ee703662       kube-apiserver-ha-764617
	d2eaae7703339       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   f0e8a8ba4e74a       coredns-6f6b679f8f-d6c7g
	8f49214f24a1f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   31ad2ee33305c       busybox-7dff88458-rcq66
	d21ff55e0d154       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   570a9af97580c       coredns-6f6b679f8f-rhb6h
	8eefbb289cdc6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   a96010807e82a       coredns-6f6b679f8f-d6c7g
	b7c860bdbf8f8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   850550a63d423       kindnet-94vkj
	1aaf72ada1592       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   7fa8ce6eea932       kube-proxy-j75vc
	c020d60e48e21       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago       Exited              etcd                      0                   c5d6c0455efc0       etcd-ha-764617
	547ba7c3099cf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      14 minutes ago       Exited              kube-scheduler            0                   09ec8ad12f1f1       kube-scheduler-ha-764617
	
	
	==> coredns [417237a361000abc40887a2662fb7b87d56264d8520dea58cafbba0151e2ce42] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41244->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41244->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41228->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41228->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41222->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1466542355]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:16:08.046) (total time: 12917ms):
	Trace[1466542355]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41222->10.96.0.1:443: read: connection reset by peer 12917ms (17:16:20.963)
	Trace[1466542355]: [12.917946118s] [12.917946118s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41222->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf] <==
	[INFO] 10.244.1.2:52681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013622s
	[INFO] 10.244.1.2:34428 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179694s
	[INFO] 10.244.1.2:38361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107495s
	[INFO] 10.244.0.4:33031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072835s
	[INFO] 10.244.0.4:46219 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00004433s
	[INFO] 10.244.2.2:36496 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117578s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1915&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1915&timeout=5m42s&timeoutSeconds=342&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[447197809]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:14:01.648) (total time: 10064ms):
	Trace[447197809]: ---"Objects listed" error:Unauthorized 10064ms (17:14:11.713)
	Trace[447197809]: [10.064349415s] [10.064349415s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1233712988]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:14:00.298) (total time: 11415ms):
	Trace[1233712988]: ---"Objects listed" error:Unauthorized 11415ms (17:14:11.714)
	Trace[1233712988]: [11.415834293s] [11.415834293s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5] <==
	[INFO] 10.244.2.2:33517 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005988s
	[INFO] 10.244.1.2:58731 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174613s
	[INFO] 10.244.1.2:43400 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105057s
	[INFO] 10.244.1.2:41968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104182s
	[INFO] 10.244.0.4:46666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121402s
	[INFO] 10.244.0.4:46004 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066296s
	[INFO] 10.244.2.2:39282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010929s
	[INFO] 10.244.1.2:58290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151089s
	[INFO] 10.244.0.4:38377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152447s
	[INFO] 10.244.0.4:57414 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061601s
	[INFO] 10.244.2.2:49722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182712s
	[INFO] 10.244.2.2:47690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014162s
	[INFO] 10.244.2.2:41318 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108034s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1915&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1930&timeout=8m14s&timeoutSeconds=494&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1930": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1930": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1349058062]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:16:04.463) (total time: 10001ms):
	Trace[1349058062]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:16:14.464)
	Trace[1349058062]: [10.001638046s] [10.001638046s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1743112337]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:16:04.990) (total time: 10001ms):
	Trace[1743112337]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:16:14.991)
	Trace[1743112337]: [10.001244969s] [10.001244969s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-764617
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_04_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:18:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:05:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-764617
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c56e74c3649b4538acc75a2edf2b5dea
	  System UUID:                c56e74c3-649b-4538-acc7-5a2edf2b5dea
	  Boot ID:                    b56c67cf-18b1-46e0-819e-927538c01209
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rcq66              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-d6c7g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-rhb6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-764617                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-94vkj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-764617             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-764617    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-j75vc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-764617             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-764617                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m7s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                    node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-764617 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Warning  ContainerGCFailed        2m59s (x2 over 3m59s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m56s (x3 over 3m45s)  kubelet          Node ha-764617 status is now: NodeNotReady
	  Normal   RegisteredNode           2m15s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   RegisteredNode           2m4s                   node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	
	
	Name:               ha-764617-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_05_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:05:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:18:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-764617-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b9263e99d3f46399a1ef68b5c9541da
	  System UUID:                9b9263e9-9d3f-4639-9a1e-f68b5c9541da
	  Boot ID:                    2f4561c3-220c-425a-ae31-ea31a2191f13
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5kg62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-764617-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7l8xt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-764617-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-764617-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-g5szr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-764617-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-764617-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-764617-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-764617-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-764617-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  NodeNotReady             9m28s                  node-controller  Node ha-764617-m02 status is now: NodeNotReady
	  Normal  Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node ha-764617-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s (x7 over 2m34s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           2m4s                   node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           35s                    node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	
	
	Name:               ha-764617-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_07_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:18:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:18:23 +0000   Fri, 16 Aug 2024 17:17:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:18:23 +0000   Fri, 16 Aug 2024 17:17:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:18:23 +0000   Fri, 16 Aug 2024 17:17:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:18:23 +0000   Fri, 16 Aug 2024 17:17:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-764617-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c731249060784cabbf92c847e80f83c3
	  System UUID:                c7312490-6078-4cab-bf92-c847e80f83c3
	  Boot ID:                    c022199a-4132-4e2b-93b4-1f4d07f9d435
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rvd47                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-764617-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-fvp67                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-764617-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-764617-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mgvzm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-764617-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-764617-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 37s                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-764617-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal   RegisteredNode           2m14s              node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal   RegisteredNode           2m4s               node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	  Normal   NodeNotReady             94s                node-controller  Node ha-764617-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 55s                kubelet          Node ha-764617-m03 has been rebooted, boot id: c022199a-4132-4e2b-93b4-1f4d07f9d435
	  Normal   NodeReady                55s                kubelet          Node ha-764617-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  54s (x2 over 55s)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s (x2 over 55s)  kubelet          Node ha-764617-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s (x2 over 55s)  kubelet          Node ha-764617-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           35s                node-controller  Node ha-764617-m03 event: Registered Node ha-764617-m03 in Controller
	
	
	Name:               ha-764617-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_08_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:08:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:18:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:18:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:18:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:18:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:18:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-764617-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6601760275c145fda2c7de8f57c611fa
	  System UUID:                66017602-75c1-45fd-a2c7-de8f57c611fa
	  Boot ID:                    e4e990ae-bfad-4760-916d-243430ff145a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-785hx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-p9gpb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-764617-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-764617-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m14s              node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   RegisteredNode           2m4s               node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   NodeNotReady             94s                node-controller  Node ha-764617-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-764617-m04 has been rebooted, boot id: e4e990ae-bfad-4760-916d-243430ff145a
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-764617-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-764617-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-764617-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-764617-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.494535] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.053885] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056699] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.201898] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.107599] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.255485] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.757333] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.397161] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059974] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.993084] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.077626] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.633091] kauditd_printk_skb: 18 callbacks suppressed
	[Aug16 17:05] kauditd_printk_skb: 41 callbacks suppressed
	[ +41.798128] kauditd_printk_skb: 26 callbacks suppressed
	[Aug16 17:15] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.147419] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.168608] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.136670] systemd-fstab-generator[3571]: Ignoring "noauto" option for root device
	[  +0.269453] systemd-fstab-generator[3599]: Ignoring "noauto" option for root device
	[  +3.441490] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +6.395201] kauditd_printk_skb: 132 callbacks suppressed
	[Aug16 17:16] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.062557] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.831551] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.360691] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a] <==
	{"level":"warn","ts":"2024-08-16T17:17:46.865340Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:17:46.897451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d6d01a71dfc61a14","from":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-16T17:17:47.033027Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-16T17:17:47.033249Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-16T17:17:50.148085Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:50.148207Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:52.033581Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:52.033771Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:54.150302Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:54.150444Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:57.033867Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:57.033978Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:58.153044Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:17:58.153103Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:18:02.034351Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:18:02.034380Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"44c87d00f43700c5","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:18:02.154757Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T17:18:02.154841Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-16T17:18:05.014672Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.015338Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.015396Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.054001Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d6d01a71dfc61a14","to":"44c87d00f43700c5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:18:05.054266Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.070546Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d6d01a71dfc61a14","to":"44c87d00f43700c5","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:18:05.070763Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	
	
	==> etcd [c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f] <==
	{"level":"warn","ts":"2024-08-16T17:14:13.562744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"856.882335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T17:14:13.562755Z","caller":"traceutil/trace.go:171","msg":"trace[1004593196] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"856.894422ms","start":"2024-08-16T17:14:12.705858Z","end":"2024-08-16T17:14:13.562752Z","steps":["trace[1004593196] 'agreement among raft nodes before linearized reading'  (duration: 856.882164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:14:13.562766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:14:12.705851Z","time spent":"856.91213ms","remote":"127.0.0.1:47644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 "}
	2024/08/16 17:14:13 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:14:13.623112Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.18:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:14:13.623226Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.18:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T17:14:13.623305Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d6d01a71dfc61a14","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T17:14:13.623523Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623588Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623644Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623762Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623839Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623893Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623956Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623965Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.623978Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624021Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624083Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624230Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624287Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.626941Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-08-16T17:14:13.627026Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-08-16T17:14:13.627046Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-764617","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.18:2380"],"advertise-client-urls":["https://192.168.39.18:2379"]}
	{"level":"warn","ts":"2024-08-16T17:14:13.627033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.915963497s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> kernel <==
	 17:18:47 up 14 min,  0 users,  load average: 0.18, 0.45, 0.35
	Linux ha-764617 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa] <==
	I0816 17:18:17.004924       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:18:27.000525       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:18:27.000609       1 main.go:299] handling current node
	I0816 17:18:27.000636       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:18:27.000646       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:18:27.000848       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:18:27.000878       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:18:27.001003       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:18:27.001028       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:18:37.009307       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:18:37.009506       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:18:37.010596       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:18:37.010969       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:18:37.011841       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:18:37.011905       1 main.go:299] handling current node
	I0816 17:18:37.011946       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:18:37.012006       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:18:47.000030       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:18:47.000102       1 main.go:299] handling current node
	I0816 17:18:47.000123       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:18:47.000176       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:18:47.000425       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:18:47.000440       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:18:47.000531       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:18:47.000558       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24] <==
	I0816 17:13:48.552243       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:13:48.552292       1 main.go:299] handling current node
	I0816 17:13:48.552314       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:13:48.552319       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:13:48.552473       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:13:48.552492       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:13:48.552553       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:13:48.552570       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:13:58.551071       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:13:58.551115       1 main.go:299] handling current node
	I0816 17:13:58.551176       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:13:58.551183       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:13:58.551405       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:13:58.551453       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:13:58.551629       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:13:58.551660       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:14:08.552049       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:14:08.552200       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:14:08.552373       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:14:08.552395       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:14:08.552487       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:14:08.552503       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:14:08.552594       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:14:08.552613       1 main.go:299] handling current node
	E0816 17:14:09.443722       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1899&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940] <==
	I0816 17:15:56.615730       1 options.go:228] external host was not specified, using 192.168.39.18
	I0816 17:15:56.629059       1 server.go:142] Version: v1.31.0
	I0816 17:15:56.629244       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:15:57.455030       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 17:15:57.476247       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:15:57.484034       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 17:15:57.484180       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 17:15:57.484472       1 instance.go:232] Using reconciler: lease
	W0816 17:16:17.454716       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0816 17:16:17.454717       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0816 17:16:17.487271       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee] <==
	I0816 17:16:39.962559       1 establishing_controller.go:81] Starting EstablishingController
	I0816 17:16:39.962582       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0816 17:16:39.962588       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 17:16:39.962600       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0816 17:16:40.063961       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:16:40.066023       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:16:40.067628       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:16:40.067928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 17:16:40.070489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:16:40.073483       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 17:16:40.073687       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:16:40.074494       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:16:40.079054       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:16:40.079187       1 policy_source.go:224] refreshing policies
	I0816 17:16:40.081651       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:16:40.094544       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:16:40.097077       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:16:40.097230       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:16:40.097272       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:16:40.097300       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:16:40.097324       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:16:40.977701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 17:16:41.708583       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18 192.168.39.184]
	I0816 17:16:41.710170       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:16:41.732036       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [290a055bbcb3d1cf9eb284751a1e49d8b00b30dffd90fe3456c00e2d0d23dadb] <==
	I0816 17:15:57.347991       1 serving.go:386] Generated self-signed cert in-memory
	I0816 17:15:57.599844       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 17:15:57.599957       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:15:57.601665       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:15:57.601825       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 17:15:57.602324       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 17:15:57.602410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0816 17:16:18.493788       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.18:8443/healthz\": dial tcp 192.168.39.18:8443: connect: connection refused"
	
	
	==> kube-controller-manager [dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d] <==
	I0816 17:17:13.210694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:17:13.390096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.843437ms"
	I0816 17:17:13.390357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="88.931µs"
	I0816 17:17:13.442848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m03"
	I0816 17:17:18.482666       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:17:19.274963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="14.753991ms"
	I0816 17:17:19.275317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="221.166µs"
	I0816 17:17:19.280423       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7v65c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7v65c\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 17:17:19.280649       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9f25b7a-603d-4578-8164-0a85cbb9ada0", APIVersion:"v1", ResourceVersion:"293", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7v65c EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7v65c": the object has been modified; please apply your changes to the latest version and try again
	I0816 17:17:21.486245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m02"
	I0816 17:17:23.526107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:17:28.557092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m03"
	I0816 17:17:52.961904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m03"
	I0816 17:17:52.981305       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m03"
	I0816 17:17:53.388969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m03"
	I0816 17:17:54.017950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.709µs"
	I0816 17:18:12.277602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:18:12.354448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:18:13.211657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.549633ms"
	I0816 17:18:13.213327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="100.27µs"
	I0816 17:18:23.542760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m03"
	I0816 17:18:39.743575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:18:39.744205       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-764617-m04"
	I0816 17:18:39.769968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:18:42.294447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	
	
	==> kube-proxy [1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d] <==
	E0816 17:12:49.380013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:12:56.419670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:12:56.419745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:12:56.419833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:12:56.419867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:12:56.419980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:12:56.420093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:05.061614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:05.061752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:05.061634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	W0816 17:13:05.061845       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:05.061851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0816 17:13:05.061862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:20.420096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:20.420208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:26.563815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:26.563914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:26.564083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:26.564182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:14:03.428978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:14:03.429261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:14:03.429083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:14:03.429416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:14:06.500827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:14:06.500917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:16:00.164339       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:03.235627       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:06.308200       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:12.451678       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:21.667673       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0816 17:16:39.662307       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	E0816 17:16:39.662531       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:16:40.129982       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:16:40.130116       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:16:40.130286       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:16:40.133388       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:16:40.133849       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:16:40.133901       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:16:40.136174       1 config.go:197] "Starting service config controller"
	I0816 17:16:40.136397       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:16:40.136461       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:16:40.136494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:16:40.137291       1 config.go:326] "Starting node config controller"
	I0816 17:16:40.137331       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:16:40.237367       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 17:16:40.237442       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:16:40.237924       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b] <==
	W0816 17:04:46.812070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:04:46.812114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 17:04:48.641797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:07:29.208916       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rvd47" node="ha-764617-m03"
	E0816 17:07:29.209097       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" pod="default/busybox-7dff88458-rvd47"
	E0816 17:07:29.210073       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rcq66" node="ha-764617"
	E0816 17:07:29.218500       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" pod="default/busybox-7dff88458-rcq66"
	E0816 17:08:05.463041       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	E0816 17:08:05.468950       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 82c775a8-d580-4201-9da7-790a5a95ef6f(kube-system/kindnet-785hx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-785hx"
	E0816 17:08:05.469002       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" pod="kube-system/kindnet-785hx"
	I0816 17:08:05.469055       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	E0816 17:14:04.583240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0816 17:14:04.846054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0816 17:14:05.519747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0816 17:14:07.227009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0816 17:14:08.262040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0816 17:14:09.631030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0816 17:14:10.772187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0816 17:14:11.268912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0816 17:14:11.374786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0816 17:14:11.570866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0816 17:14:12.701254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0816 17:14:13.171368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0816 17:14:13.491001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0816 17:14:13.554502       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993] <==
	W0816 17:16:34.059893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.18:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.059953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.18:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:34.143234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.18:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.143308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.18:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:34.378890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.18:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.378966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.18:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:34.479671       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.18:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.479744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.18:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:34.572313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.18:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.572360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.18:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:36.136767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.18:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:36.136930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.18:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:36.611952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.18:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:36.612004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.18:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:36.622913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.18:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:36.622976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.18:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:37.284461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.18:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:37.284523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.18:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:37.402497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.18:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:37.402564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.18:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:40.005972       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:16:40.006487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:16:40.006719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:16:40.006791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 17:16:55.706093       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 17:17:18 ha-764617 kubelet[1328]: E0816 17:17:18.902306    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828638901694499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:18 ha-764617 kubelet[1328]: E0816 17:17:18.902349    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828638901694499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:28 ha-764617 kubelet[1328]: E0816 17:17:28.903979    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828648903612183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:28 ha-764617 kubelet[1328]: E0816 17:17:28.904451    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828648903612183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:38 ha-764617 kubelet[1328]: E0816 17:17:38.907014    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828658906568422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:38 ha-764617 kubelet[1328]: E0816 17:17:38.907562    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828658906568422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:42 ha-764617 kubelet[1328]: I0816 17:17:42.564532    1328 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-764617" podUID="a30deffd-45c9-4685-ae4c-0c0f113f3bd7"
	Aug 16 17:17:42 ha-764617 kubelet[1328]: I0816 17:17:42.591925    1328 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-764617"
	Aug 16 17:17:48 ha-764617 kubelet[1328]: E0816 17:17:48.600754    1328 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:17:48 ha-764617 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:17:48 ha-764617 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:17:48 ha-764617 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:17:48 ha-764617 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:17:48 ha-764617 kubelet[1328]: E0816 17:17:48.909788    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828668909352612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:48 ha-764617 kubelet[1328]: E0816 17:17:48.909826    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828668909352612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:58 ha-764617 kubelet[1328]: E0816 17:17:58.912373    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828678911828652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:17:58 ha-764617 kubelet[1328]: E0816 17:17:58.912423    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828678911828652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:08 ha-764617 kubelet[1328]: E0816 17:18:08.915519    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828688914841908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:08 ha-764617 kubelet[1328]: E0816 17:18:08.915935    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828688914841908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:18 ha-764617 kubelet[1328]: E0816 17:18:18.918072    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828698917392242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:18 ha-764617 kubelet[1328]: E0816 17:18:18.918440    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828698917392242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:28 ha-764617 kubelet[1328]: E0816 17:18:28.920943    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828708920546465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:28 ha-764617 kubelet[1328]: E0816 17:18:28.921389    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828708920546465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:38 ha-764617 kubelet[1328]: E0816 17:18:38.925051    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828718924430894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:18:38 ha-764617 kubelet[1328]: E0816 17:18:38.925543    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828718924430894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 17:18:46.503216   35468 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19461-9545/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
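The "token too long" failure above comes from Go's bufio.Scanner, which rejects any line longer than bufio.MaxScanTokenSize (64 KiB) by default; lastStart.txt evidently contains a line longer than that. Below is a minimal, self-contained sketch (hypothetical file path, not minikube's actual logs code) of reading such a file with an enlarged scanner buffer so the long line no longer aborts the read:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical path; the failing read in the report is
	// .minikube/logs/lastStart.txt.
	f, err := os.Open("/tmp/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.Scanner fails with "bufio.Scanner: token too long" once a line
	// exceeds bufio.MaxScanTokenSize (64 KiB); raise the limit to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}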
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-764617 -n ha-764617
helpers_test.go:261: (dbg) Run:  kubectl --context ha-764617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (398.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 stop -v=7 --alsologtostderr: exit status 82 (2m0.459555217s)

                                                
                                                
-- stdout --
	* Stopping node "ha-764617-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:19:05.806041   35879 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:19:05.806166   35879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:19:05.806176   35879 out.go:358] Setting ErrFile to fd 2...
	I0816 17:19:05.806180   35879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:19:05.806347   35879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:19:05.806592   35879 out.go:352] Setting JSON to false
	I0816 17:19:05.806664   35879 mustload.go:65] Loading cluster: ha-764617
	I0816 17:19:05.807012   35879 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:19:05.807116   35879 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:19:05.807315   35879 mustload.go:65] Loading cluster: ha-764617
	I0816 17:19:05.807459   35879 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:19:05.807487   35879 stop.go:39] StopHost: ha-764617-m04
	I0816 17:19:05.807857   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:19:05.807902   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:19:05.823144   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38077
	I0816 17:19:05.823715   35879 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:19:05.824279   35879 main.go:141] libmachine: Using API Version  1
	I0816 17:19:05.824319   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:19:05.824649   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:19:05.826667   35879 out.go:177] * Stopping node "ha-764617-m04"  ...
	I0816 17:19:05.828233   35879 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 17:19:05.828266   35879 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:19:05.828487   35879 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 17:19:05.828510   35879 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:19:05.831373   35879 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:19:05.831817   35879 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:18:34 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:19:05.831840   35879 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:19:05.831951   35879 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:19:05.832119   35879 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:19:05.832259   35879 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:19:05.832393   35879 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	I0816 17:19:05.912278   35879 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 17:19:05.965074   35879 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 17:19:06.018500   35879 main.go:141] libmachine: Stopping "ha-764617-m04"...
	I0816 17:19:06.018526   35879 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:19:06.020061   35879 main.go:141] libmachine: (ha-764617-m04) Calling .Stop
	I0816 17:19:06.023418   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 0/120
	I0816 17:19:07.024663   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 1/120
	I0816 17:19:08.026412   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 2/120
	I0816 17:19:09.027890   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 3/120
	I0816 17:19:10.029311   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 4/120
	I0816 17:19:11.031030   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 5/120
	I0816 17:19:12.032673   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 6/120
	I0816 17:19:13.034783   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 7/120
	I0816 17:19:14.035983   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 8/120
	I0816 17:19:15.037367   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 9/120
	I0816 17:19:16.038626   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 10/120
	I0816 17:19:17.040287   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 11/120
	I0816 17:19:18.041627   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 12/120
	I0816 17:19:19.043159   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 13/120
	I0816 17:19:20.044679   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 14/120
	I0816 17:19:21.046427   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 15/120
	I0816 17:19:22.047801   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 16/120
	I0816 17:19:23.049183   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 17/120
	I0816 17:19:24.051088   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 18/120
	I0816 17:19:25.052445   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 19/120
	I0816 17:19:26.054619   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 20/120
	I0816 17:19:27.056816   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 21/120
	I0816 17:19:28.058153   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 22/120
	I0816 17:19:29.060421   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 23/120
	I0816 17:19:30.062385   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 24/120
	I0816 17:19:31.064245   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 25/120
	I0816 17:19:32.065822   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 26/120
	I0816 17:19:33.067309   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 27/120
	I0816 17:19:34.068550   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 28/120
	I0816 17:19:35.069830   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 29/120
	I0816 17:19:36.071941   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 30/120
	I0816 17:19:37.073219   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 31/120
	I0816 17:19:38.074997   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 32/120
	I0816 17:19:39.076386   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 33/120
	I0816 17:19:40.078223   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 34/120
	I0816 17:19:41.080321   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 35/120
	I0816 17:19:42.081855   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 36/120
	I0816 17:19:43.083191   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 37/120
	I0816 17:19:44.084521   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 38/120
	I0816 17:19:45.086055   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 39/120
	I0816 17:19:46.088166   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 40/120
	I0816 17:19:47.089559   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 41/120
	I0816 17:19:48.091067   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 42/120
	I0816 17:19:49.092768   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 43/120
	I0816 17:19:50.094187   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 44/120
	I0816 17:19:51.096178   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 45/120
	I0816 17:19:52.097408   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 46/120
	I0816 17:19:53.098926   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 47/120
	I0816 17:19:54.100667   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 48/120
	I0816 17:19:55.102253   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 49/120
	I0816 17:19:56.103682   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 50/120
	I0816 17:19:57.104968   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 51/120
	I0816 17:19:58.107036   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 52/120
	I0816 17:19:59.108611   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 53/120
	I0816 17:20:00.110190   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 54/120
	I0816 17:20:01.111883   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 55/120
	I0816 17:20:02.113254   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 56/120
	I0816 17:20:03.115653   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 57/120
	I0816 17:20:04.117411   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 58/120
	I0816 17:20:05.119006   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 59/120
	I0816 17:20:06.121309   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 60/120
	I0816 17:20:07.122564   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 61/120
	I0816 17:20:08.124186   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 62/120
	I0816 17:20:09.125813   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 63/120
	I0816 17:20:10.127018   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 64/120
	I0816 17:20:11.128881   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 65/120
	I0816 17:20:12.130489   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 66/120
	I0816 17:20:13.132153   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 67/120
	I0816 17:20:14.133637   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 68/120
	I0816 17:20:15.135229   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 69/120
	I0816 17:20:16.137445   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 70/120
	I0816 17:20:17.139020   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 71/120
	I0816 17:20:18.140288   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 72/120
	I0816 17:20:19.141766   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 73/120
	I0816 17:20:20.143336   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 74/120
	I0816 17:20:21.145531   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 75/120
	I0816 17:20:22.146695   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 76/120
	I0816 17:20:23.148035   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 77/120
	I0816 17:20:24.149608   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 78/120
	I0816 17:20:25.150803   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 79/120
	I0816 17:20:26.152966   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 80/120
	I0816 17:20:27.154975   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 81/120
	I0816 17:20:28.156276   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 82/120
	I0816 17:20:29.157779   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 83/120
	I0816 17:20:30.159450   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 84/120
	I0816 17:20:31.161149   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 85/120
	I0816 17:20:32.162491   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 86/120
	I0816 17:20:33.163898   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 87/120
	I0816 17:20:34.165226   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 88/120
	I0816 17:20:35.167293   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 89/120
	I0816 17:20:36.169596   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 90/120
	I0816 17:20:37.171077   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 91/120
	I0816 17:20:38.172428   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 92/120
	I0816 17:20:39.174019   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 93/120
	I0816 17:20:40.175446   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 94/120
	I0816 17:20:41.177645   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 95/120
	I0816 17:20:42.179188   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 96/120
	I0816 17:20:43.180540   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 97/120
	I0816 17:20:44.182021   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 98/120
	I0816 17:20:45.184149   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 99/120
	I0816 17:20:46.186421   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 100/120
	I0816 17:20:47.187644   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 101/120
	I0816 17:20:48.189180   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 102/120
	I0816 17:20:49.191041   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 103/120
	I0816 17:20:50.193295   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 104/120
	I0816 17:20:51.195213   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 105/120
	I0816 17:20:52.196501   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 106/120
	I0816 17:20:53.197889   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 107/120
	I0816 17:20:54.199090   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 108/120
	I0816 17:20:55.200580   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 109/120
	I0816 17:20:56.202494   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 110/120
	I0816 17:20:57.203880   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 111/120
	I0816 17:20:58.205212   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 112/120
	I0816 17:20:59.207183   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 113/120
	I0816 17:21:00.208570   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 114/120
	I0816 17:21:01.210771   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 115/120
	I0816 17:21:02.212037   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 116/120
	I0816 17:21:03.213341   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 117/120
	I0816 17:21:04.215062   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 118/120
	I0816 17:21:05.216560   35879 main.go:141] libmachine: (ha-764617-m04) Waiting for machine to stop 119/120
	I0816 17:21:06.217150   35879 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 17:21:06.217195   35879 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 17:21:06.219038   35879 out.go:201] 
	W0816 17:21:06.220426   35879 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 17:21:06.220442   35879 out.go:270] * 
	* 
	W0816 17:21:06.222604   35879 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 17:21:06.223755   35879 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-764617 stop -v=7 --alsologtostderr": exit status 82
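Exit status 82 above is the GUEST_STOP_TIMEOUT path: the driver polls the KVM domain once a second ("Waiting for machine to stop 0/120" through "119/120") and gives up while the VM still reports "Running". A minimal sketch of that poll-until-stopped pattern follows; the getState callback, attempt count, and interval are illustrative assumptions, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState until it reports "Stopped" or attempts run out.
// getState stands in for a driver call such as libmachine's GetState; in the
// failing run it kept returning "Running" for all 120 one-second attempts.
func waitForStop(getState func() (string, error), attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		st, err := getState()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never stops; the real loop uses 120 attempts at 1s.
	err := waitForStop(func() (string, error) { return "Running", nil }, 5, 100*time.Millisecond)
	fmt.Println("stop err:", err)
}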
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
E0816 17:21:12.269114   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr: exit status 3 (18.971045028s)

                                                
                                                
-- stdout --
	ha-764617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-764617-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:21:06.267491   36319 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:21:06.267611   36319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:21:06.267620   36319 out.go:358] Setting ErrFile to fd 2...
	I0816 17:21:06.267625   36319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:21:06.267791   36319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:21:06.267947   36319 out.go:352] Setting JSON to false
	I0816 17:21:06.267970   36319 mustload.go:65] Loading cluster: ha-764617
	I0816 17:21:06.268105   36319 notify.go:220] Checking for updates...
	I0816 17:21:06.268305   36319 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:21:06.268317   36319 status.go:255] checking status of ha-764617 ...
	I0816 17:21:06.268710   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.268773   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.287689   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38005
	I0816 17:21:06.288151   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.288877   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.288916   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.290204   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.290468   36319 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:21:06.292351   36319 status.go:330] ha-764617 host status = "Running" (err=<nil>)
	I0816 17:21:06.292379   36319 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:21:06.292716   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.292763   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.307653   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0816 17:21:06.308009   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.308432   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.308450   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.308764   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.308928   36319 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:21:06.311814   36319 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:21:06.312306   36319 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:21:06.312354   36319 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:21:06.312422   36319 host.go:66] Checking if "ha-764617" exists ...
	I0816 17:21:06.312771   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.312807   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.328678   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0816 17:21:06.329132   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.329691   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.329715   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.329989   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.330151   36319 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:21:06.330304   36319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:21:06.330334   36319 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:21:06.333023   36319 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:21:06.333544   36319 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:21:06.333578   36319 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:21:06.333774   36319 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:21:06.333956   36319 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:21:06.334085   36319 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:21:06.334221   36319 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:21:06.422696   36319 ssh_runner.go:195] Run: systemctl --version
	I0816 17:21:06.430452   36319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:21:06.450127   36319 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:21:06.450159   36319 api_server.go:166] Checking apiserver status ...
	I0816 17:21:06.450212   36319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:21:06.467058   36319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4896/cgroup
	W0816 17:21:06.477892   36319 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4896/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:21:06.477961   36319 ssh_runner.go:195] Run: ls
	I0816 17:21:06.482938   36319 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:21:06.489253   36319 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:21:06.489276   36319 status.go:422] ha-764617 apiserver status = Running (err=<nil>)
	I0816 17:21:06.489287   36319 status.go:257] ha-764617 status: &{Name:ha-764617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:21:06.489311   36319 status.go:255] checking status of ha-764617-m02 ...
	I0816 17:21:06.489610   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.489648   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.504428   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I0816 17:21:06.504780   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.505269   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.505290   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.505641   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.505790   36319 main.go:141] libmachine: (ha-764617-m02) Calling .GetState
	I0816 17:21:06.507142   36319 status.go:330] ha-764617-m02 host status = "Running" (err=<nil>)
	I0816 17:21:06.507158   36319 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:21:06.507458   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.507507   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.522057   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0816 17:21:06.522400   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.522796   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.522818   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.523156   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.523342   36319 main.go:141] libmachine: (ha-764617-m02) Calling .GetIP
	I0816 17:21:06.526202   36319 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:21:06.526620   36319 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:16:00 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:21:06.526640   36319 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:21:06.526838   36319 host.go:66] Checking if "ha-764617-m02" exists ...
	I0816 17:21:06.527131   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.527171   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.541549   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0816 17:21:06.541986   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.542495   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.542520   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.542873   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.543081   36319 main.go:141] libmachine: (ha-764617-m02) Calling .DriverName
	I0816 17:21:06.543269   36319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:21:06.543303   36319 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHHostname
	I0816 17:21:06.546182   36319 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:21:06.546574   36319 main.go:141] libmachine: (ha-764617-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:3e:7f", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:16:00 +0000 UTC Type:0 Mac:52:54:00:cf:3e:7f Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-764617-m02 Clientid:01:52:54:00:cf:3e:7f}
	I0816 17:21:06.546602   36319 main.go:141] libmachine: (ha-764617-m02) DBG | domain ha-764617-m02 has defined IP address 192.168.39.184 and MAC address 52:54:00:cf:3e:7f in network mk-ha-764617
	I0816 17:21:06.546722   36319 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHPort
	I0816 17:21:06.546887   36319 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHKeyPath
	I0816 17:21:06.547029   36319 main.go:141] libmachine: (ha-764617-m02) Calling .GetSSHUsername
	I0816 17:21:06.547177   36319 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m02/id_rsa Username:docker}
	I0816 17:21:06.624645   36319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:21:06.642445   36319 kubeconfig.go:125] found "ha-764617" server: "https://192.168.39.254:8443"
	I0816 17:21:06.642471   36319 api_server.go:166] Checking apiserver status ...
	I0816 17:21:06.642504   36319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:21:06.661529   36319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0816 17:21:06.670038   36319 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:21:06.670097   36319 ssh_runner.go:195] Run: ls
	I0816 17:21:06.674129   36319 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0816 17:21:06.678181   36319 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0816 17:21:06.678206   36319 status.go:422] ha-764617-m02 apiserver status = Running (err=<nil>)
	I0816 17:21:06.678217   36319 status.go:257] ha-764617-m02 status: &{Name:ha-764617-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:21:06.678236   36319 status.go:255] checking status of ha-764617-m04 ...
	I0816 17:21:06.678541   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.678575   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.694750   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
	I0816 17:21:06.695164   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.695658   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.695680   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.695976   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.696227   36319 main.go:141] libmachine: (ha-764617-m04) Calling .GetState
	I0816 17:21:06.697764   36319 status.go:330] ha-764617-m04 host status = "Running" (err=<nil>)
	I0816 17:21:06.697777   36319 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:21:06.698115   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.698153   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.712784   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35449
	I0816 17:21:06.713173   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.713612   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.713632   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.713953   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.714140   36319 main.go:141] libmachine: (ha-764617-m04) Calling .GetIP
	I0816 17:21:06.716918   36319 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:21:06.717444   36319 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:18:34 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:21:06.717471   36319 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:21:06.717643   36319 host.go:66] Checking if "ha-764617-m04" exists ...
	I0816 17:21:06.717972   36319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:21:06.718017   36319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:21:06.732338   36319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0816 17:21:06.732726   36319 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:21:06.733161   36319 main.go:141] libmachine: Using API Version  1
	I0816 17:21:06.733180   36319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:21:06.733519   36319 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:21:06.733705   36319 main.go:141] libmachine: (ha-764617-m04) Calling .DriverName
	I0816 17:21:06.733866   36319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:21:06.733887   36319 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHHostname
	I0816 17:21:06.736728   36319 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:21:06.737228   36319 main.go:141] libmachine: (ha-764617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:8e:ba", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:18:34 +0000 UTC Type:0 Mac:52:54:00:61:8e:ba Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-764617-m04 Clientid:01:52:54:00:61:8e:ba}
	I0816 17:21:06.737254   36319 main.go:141] libmachine: (ha-764617-m04) DBG | domain ha-764617-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:61:8e:ba in network mk-ha-764617
	I0816 17:21:06.737512   36319 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHPort
	I0816 17:21:06.737724   36319 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHKeyPath
	I0816 17:21:06.737904   36319 main.go:141] libmachine: (ha-764617-m04) Calling .GetSSHUsername
	I0816 17:21:06.738039   36319 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617-m04/id_rsa Username:docker}
	W0816 17:21:25.196871   36319 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.137:22: connect: no route to host
	W0816 17:21:25.196967   36319 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.137:22: connect: no route to host
	E0816 17:21:25.196984   36319 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.137:22: connect: no route to host
	I0816 17:21:25.197001   36319 status.go:257] ha-764617-m04 status: &{Name:ha-764617-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0816 17:21:25.197018   36319 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.137:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr" : exit status 3
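The status failure reduces to the SSH dial against ha-764617-m04 (192.168.39.137:22) returning "connect: no route to host", which is why that node is reported as host: Error / kubelet: Nonexistent. A minimal sketch of the same kind of TCP reachability probe (address and timeout are illustrative, not the tool's own code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Illustrative address; the report dials 192.168.39.137:22 for SSH.
	addr := "192.168.39.137:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// An unreachable or half-stopped VM typically surfaces here as
		// "connect: no route to host" or a dial timeout.
		fmt.Println("ssh port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}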
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-764617 -n ha-764617
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-764617 logs -n 25: (1.600612352s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m04 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp testdata/cp-test.txt                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617:/home/docker/cp-test_ha-764617-m04_ha-764617.txt                       |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617 sudo cat                                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617.txt                                 |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m02:/home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m02 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m03:/home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n                                                                 | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | ha-764617-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-764617 ssh -n ha-764617-m03 sudo cat                                          | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC | 16 Aug 24 17:08 UTC |
	|         | /home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-764617 node stop m02 -v=7                                                     | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-764617 node start m02 -v=7                                                    | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-764617 -v=7                                                           | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-764617 -v=7                                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-764617 --wait=true -v=7                                                    | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:14 UTC | 16 Aug 24 17:18 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-764617                                                                | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:18 UTC |                     |
	| node    | ha-764617 node delete m03 -v=7                                                   | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:18 UTC | 16 Aug 24 17:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-764617 stop -v=7                                                              | ha-764617 | jenkins | v1.33.1 | 16 Aug 24 17:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
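For reference, the HA lifecycle exercised in the command history above can be replayed by hand against the existing profile with a sequence along the following lines. This is an illustrative sketch only (it assumes a minikube binary on PATH and the already-created kvm2/cri-o profile named in the table); it is not part of the captured log.

    # Replay of the lifecycle steps recorded in the table above (illustrative).
    minikube -p ha-764617 node stop m02 -v=7 --alsologtostderr      # stop a secondary control-plane node
    minikube -p ha-764617 node start m02 -v=7 --alsologtostderr     # restart it
    minikube node list -p ha-764617 -v=7 --alsologtostderr          # confirm the node set
    minikube stop -p ha-764617 -v=7 --alsologtostderr               # stop the whole cluster
    minikube start -p ha-764617 --wait=true -v=7 --alsologtostderr  # restart and wait for all components
    minikube node list -p ha-764617                                 # confirm the nodes came back
    minikube -p ha-764617 node delete m03 -v=7 --alsologtostderr    # remove the third control-plane node
    minikube -p ha-764617 stop -v=7 --alsologtostderr               # final stop
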
	==> Last Start <==
	Log file created at: 2024/08/16 17:14:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:14:12.681840   33567 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:14:12.682169   33567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:14:12.682183   33567 out.go:358] Setting ErrFile to fd 2...
	I0816 17:14:12.682190   33567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:14:12.682629   33567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:14:12.683709   33567 out.go:352] Setting JSON to false
	I0816 17:14:12.684865   33567 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3351,"bootTime":1723825102,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:14:12.684953   33567 start.go:139] virtualization: kvm guest
	I0816 17:14:12.687095   33567 out.go:177] * [ha-764617] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:14:12.688589   33567 notify.go:220] Checking for updates...
	I0816 17:14:12.688598   33567 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:14:12.690514   33567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:14:12.692050   33567 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:14:12.693171   33567 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:14:12.694641   33567 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:14:12.695807   33567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:14:12.697475   33567 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:14:12.697612   33567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:14:12.698197   33567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:14:12.698255   33567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:14:12.714606   33567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I0816 17:14:12.714976   33567 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:14:12.715566   33567 main.go:141] libmachine: Using API Version  1
	I0816 17:14:12.715594   33567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:14:12.715960   33567 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:14:12.716152   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:14:12.753582   33567 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 17:14:12.754920   33567 start.go:297] selected driver: kvm2
	I0816 17:14:12.754940   33567 start.go:901] validating driver "kvm2" against &{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:14:12.755112   33567 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:14:12.755482   33567 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:14:12.755569   33567 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 17:14:12.770376   33567 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 17:14:12.771018   33567 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:14:12.771081   33567 cni.go:84] Creating CNI manager for ""
	I0816 17:14:12.771092   33567 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 17:14:12.771145   33567 start.go:340] cluster config:
	{Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:14:12.771292   33567 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:14:12.773256   33567 out.go:177] * Starting "ha-764617" primary control-plane node in "ha-764617" cluster
	I0816 17:14:12.774450   33567 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:14:12.774482   33567 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 17:14:12.774489   33567 cache.go:56] Caching tarball of preloaded images
	I0816 17:14:12.774586   33567 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:14:12.774602   33567 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:14:12.774720   33567 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/config.json ...
	I0816 17:14:12.774909   33567 start.go:360] acquireMachinesLock for ha-764617: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:14:12.774960   33567 start.go:364] duration metric: took 31.964µs to acquireMachinesLock for "ha-764617"
	I0816 17:14:12.774980   33567 start.go:96] Skipping create...Using existing machine configuration
	I0816 17:14:12.774992   33567 fix.go:54] fixHost starting: 
	I0816 17:14:12.775286   33567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:14:12.775319   33567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:14:12.789423   33567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0816 17:14:12.789796   33567 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:14:12.790222   33567 main.go:141] libmachine: Using API Version  1
	I0816 17:14:12.790241   33567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:14:12.790643   33567 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:14:12.790973   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:14:12.791151   33567 main.go:141] libmachine: (ha-764617) Calling .GetState
	I0816 17:14:12.792756   33567 fix.go:112] recreateIfNeeded on ha-764617: state=Running err=<nil>
	W0816 17:14:12.792794   33567 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 17:14:12.794795   33567 out.go:177] * Updating the running kvm2 "ha-764617" VM ...
	I0816 17:14:12.796080   33567 machine.go:93] provisionDockerMachine start ...
	I0816 17:14:12.796098   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:14:12.796360   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:12.798979   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.799400   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:12.799426   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.799569   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:12.799739   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.799891   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.800061   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:12.800227   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:12.800431   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:12.800442   33567 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 17:14:12.922160   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617
	
	I0816 17:14:12.922188   33567 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:14:12.922481   33567 buildroot.go:166] provisioning hostname "ha-764617"
	I0816 17:14:12.922504   33567 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:14:12.922744   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:12.925190   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.925619   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:12.925646   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:12.925802   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:12.925995   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.926141   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:12.926315   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:12.926487   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:12.926664   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:12.926679   33567 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-764617 && echo "ha-764617" | sudo tee /etc/hostname
	I0816 17:14:13.056269   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-764617
	
	I0816 17:14:13.056301   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.058990   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.059445   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.059475   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.059641   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:13.059823   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.059993   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.060114   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:13.060246   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:13.060438   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:13.060460   33567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-764617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-764617/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-764617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:14:13.173193   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:14:13.173228   33567 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:14:13.173256   33567 buildroot.go:174] setting up certificates
	I0816 17:14:13.173269   33567 provision.go:84] configureAuth start
	I0816 17:14:13.173283   33567 main.go:141] libmachine: (ha-764617) Calling .GetMachineName
	I0816 17:14:13.173577   33567 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:14:13.176292   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.176679   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.176707   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.176853   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.179121   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.179415   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.179445   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.179572   33567 provision.go:143] copyHostCerts
	I0816 17:14:13.179621   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:14:13.179657   33567 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:14:13.179666   33567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:14:13.179739   33567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:14:13.179818   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:14:13.179837   33567 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:14:13.179841   33567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:14:13.179871   33567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:14:13.179910   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:14:13.179932   33567 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:14:13.179937   33567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:14:13.179963   33567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:14:13.180006   33567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.ha-764617 san=[127.0.0.1 192.168.39.18 ha-764617 localhost minikube]
	I0816 17:14:13.268473   33567 provision.go:177] copyRemoteCerts
	I0816 17:14:13.268524   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:14:13.268546   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.271093   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.271435   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.271462   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.271666   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:13.271858   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.272012   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:13.272165   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:14:13.358638   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:14:13.358700   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:14:13.382393   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:14:13.382485   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0816 17:14:13.406227   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:14:13.406314   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:14:13.430275   33567 provision.go:87] duration metric: took 256.992665ms to configureAuth
	I0816 17:14:13.430301   33567 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:14:13.430564   33567 config.go:182] Loaded profile config "ha-764617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:14:13.430639   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:14:13.432992   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.433404   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:14:13.433432   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:14:13.433526   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:14:13.433697   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.433895   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:14:13.434031   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:14:13.434190   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:14:13.434412   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:14:13.434427   33567 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:15:44.383680   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:15:44.383706   33567 machine.go:96] duration metric: took 1m31.587612978s to provisionDockerMachine
	I0816 17:15:44.383720   33567 start.go:293] postStartSetup for "ha-764617" (driver="kvm2")
	I0816 17:15:44.383733   33567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:15:44.383752   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.384099   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:15:44.384123   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.386974   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.387470   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.387493   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.387637   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.387835   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.387994   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.388127   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:15:44.475562   33567 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:15:44.479566   33567 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:15:44.479601   33567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:15:44.479676   33567 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:15:44.479762   33567 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:15:44.479780   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:15:44.479864   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:15:44.488385   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:15:44.510271   33567 start.go:296] duration metric: took 126.538123ms for postStartSetup
	I0816 17:15:44.510330   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.510622   33567 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 17:15:44.510646   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.513338   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.513747   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.513769   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.513920   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.514113   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.514248   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.514435   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	W0816 17:15:44.598247   33567 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0816 17:15:44.598274   33567 fix.go:56] duration metric: took 1m31.82328506s for fixHost
	I0816 17:15:44.598294   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.601014   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.601372   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.601401   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.601597   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.601802   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.601972   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.602067   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.602209   33567 main.go:141] libmachine: Using SSH client type: native
	I0816 17:15:44.602436   33567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0816 17:15:44.602455   33567 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:15:44.717188   33567 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723828544.671591292
	
	I0816 17:15:44.717211   33567 fix.go:216] guest clock: 1723828544.671591292
	I0816 17:15:44.717218   33567 fix.go:229] Guest: 2024-08-16 17:15:44.671591292 +0000 UTC Remote: 2024-08-16 17:15:44.59828124 +0000 UTC m=+91.949787318 (delta=73.310052ms)
	I0816 17:15:44.717246   33567 fix.go:200] guest clock delta is within tolerance: 73.310052ms
	I0816 17:15:44.717251   33567 start.go:83] releasing machines lock for "ha-764617", held for 1m31.942283255s
	I0816 17:15:44.717272   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.717538   33567 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:15:44.720100   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.720508   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.720531   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.720714   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.721187   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.721359   33567 main.go:141] libmachine: (ha-764617) Calling .DriverName
	I0816 17:15:44.721455   33567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:15:44.721501   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.721547   33567 ssh_runner.go:195] Run: cat /version.json
	I0816 17:15:44.721566   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHHostname
	I0816 17:15:44.724022   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724369   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724448   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.724472   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724583   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.724761   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.724903   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:44.724922   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:44.724935   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.725034   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHPort
	I0816 17:15:44.725112   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:15:44.725192   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHKeyPath
	I0816 17:15:44.725317   33567 main.go:141] libmachine: (ha-764617) Calling .GetSSHUsername
	I0816 17:15:44.725453   33567 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/ha-764617/id_rsa Username:docker}
	I0816 17:15:44.805301   33567 ssh_runner.go:195] Run: systemctl --version
	I0816 17:15:44.845850   33567 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:15:45.004125   33567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:15:45.012739   33567 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:15:45.012813   33567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:15:45.021271   33567 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 17:15:45.021291   33567 start.go:495] detecting cgroup driver to use...
	I0816 17:15:45.021394   33567 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:15:45.036322   33567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:15:45.050097   33567 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:15:45.050155   33567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:15:45.064096   33567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:15:45.077640   33567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:15:45.230350   33567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:15:45.374034   33567 docker.go:233] disabling docker service ...
	I0816 17:15:45.374106   33567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:15:45.392104   33567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:15:45.405018   33567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:15:45.546831   33567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:15:45.686710   33567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:15:45.700826   33567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:15:45.719391   33567 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:15:45.719449   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.728931   33567 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:15:45.728996   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.738455   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.747724   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.757078   33567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:15:45.766525   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.775796   33567 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.787326   33567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:15:45.797235   33567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:15:45.806123   33567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:15:45.814653   33567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:15:45.951155   33567 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:15:48.920448   33567 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.969262566s)
	I0816 17:15:48.920482   33567 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:15:48.920533   33567 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:15:48.929517   33567 start.go:563] Will wait 60s for crictl version
	I0816 17:15:48.929606   33567 ssh_runner.go:195] Run: which crictl
	I0816 17:15:48.933325   33567 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:15:48.967726   33567 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:15:48.967829   33567 ssh_runner.go:195] Run: crio --version
	I0816 17:15:48.995025   33567 ssh_runner.go:195] Run: crio --version
	I0816 17:15:49.024017   33567 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:15:49.025551   33567 main.go:141] libmachine: (ha-764617) Calling .GetIP
	I0816 17:15:49.028362   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:49.028732   33567 main.go:141] libmachine: (ha-764617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:ba:f5", ip: ""} in network mk-ha-764617: {Iface:virbr1 ExpiryTime:2024-08-16 18:04:24 +0000 UTC Type:0 Mac:52:54:00:5b:ba:f5 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-764617 Clientid:01:52:54:00:5b:ba:f5}
	I0816 17:15:49.028769   33567 main.go:141] libmachine: (ha-764617) DBG | domain ha-764617 has defined IP address 192.168.39.18 and MAC address 52:54:00:5b:ba:f5 in network mk-ha-764617
	I0816 17:15:49.029002   33567 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:15:49.033556   33567 kubeadm.go:883] updating cluster {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:15:49.033697   33567 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:15:49.033755   33567 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:15:49.076085   33567 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:15:49.076105   33567 crio.go:433] Images already preloaded, skipping extraction
	I0816 17:15:49.076162   33567 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:15:49.109504   33567 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:15:49.109522   33567 cache_images.go:84] Images are preloaded, skipping loading
	I0816 17:15:49.109530   33567 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.31.0 crio true true} ...
	I0816 17:15:49.109670   33567 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-764617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:15:49.109753   33567 ssh_runner.go:195] Run: crio config
	I0816 17:15:49.157459   33567 cni.go:84] Creating CNI manager for ""
	I0816 17:15:49.157484   33567 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 17:15:49.157493   33567 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:15:49.157519   33567 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-764617 NodeName:ha-764617 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:15:49.157685   33567 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-764617"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
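The kubeadm, kubelet, and kube-proxy configuration rendered above ties together the pod subnet (10.244.0.0/16), the service subnet (10.96.0.0/12), and the node's advertise address (192.168.39.18). Purely as an illustration (this is not minikube code), a small standalone Go program using only the standard library can confirm that the advertise address does not fall inside either subnet:

    // Illustrative sketch only: check that the node's advertise address from the
    // generated kubeadm config is outside the pod and service CIDRs.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        advertise := net.ParseIP("192.168.39.18")
        for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
            _, ipnet, err := net.ParseCIDR(cidr)
            if err != nil {
                panic(err)
            }
            // Contains reports whether the advertise address lies in this subnet.
            fmt.Printf("%s contains %s: %v\n", cidr, advertise, ipnet.Contains(advertise))
        }
    }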
	
	I0816 17:15:49.157714   33567 kube-vip.go:115] generating kube-vip config ...
	I0816 17:15:49.157753   33567 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 17:15:49.168781   33567 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 17:15:49.168904   33567 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
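The generated kube-vip static pod above advertises the control-plane VIP 192.168.39.254 on port 8443 with load-balancing enabled (lb_enable/lb_port). As an illustrative sketch only, and assuming the VIP is reachable from wherever the program runs, a short Go probe can confirm that the endpoint accepts TCP connections:

    // Illustrative sketch: probe the kube-vip control-plane VIP from the log above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the VIP:port pair that kube-vip is configured to serve.
        conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP answering on", conn.RemoteAddr())
    }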
	I0816 17:15:49.168961   33567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:15:49.178411   33567 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:15:49.178478   33567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 17:15:49.187170   33567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0816 17:15:49.203351   33567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:15:49.218712   33567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0816 17:15:49.233914   33567 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0816 17:15:49.251037   33567 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0816 17:15:49.254744   33567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:15:49.393613   33567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:15:49.407763   33567 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617 for IP: 192.168.39.18
	I0816 17:15:49.407794   33567 certs.go:194] generating shared ca certs ...
	I0816 17:15:49.407812   33567 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:15:49.407979   33567 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:15:49.408050   33567 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:15:49.408069   33567 certs.go:256] generating profile certs ...
	I0816 17:15:49.408191   33567 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/client.key
	I0816 17:15:49.408231   33567 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208
	I0816 17:15:49.408265   33567 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18 192.168.39.184 192.168.39.253 192.168.39.254]
	I0816 17:15:49.529281   33567 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208 ...
	I0816 17:15:49.529313   33567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208: {Name:mkba387e9626a8467f3548bc2879abbf94f19965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:15:49.529491   33567 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208 ...
	I0816 17:15:49.529505   33567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208: {Name:mkacc5f31f268458dfb07a0a1f8c85e5d2963b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:15:49.529587   33567 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt.81eed208 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt
	I0816 17:15:49.529778   33567 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key.81eed208 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key
	I0816 17:15:49.529920   33567 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key
	I0816 17:15:49.529936   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:15:49.529950   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:15:49.529966   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:15:49.529982   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:15:49.529997   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:15:49.530011   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:15:49.530029   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:15:49.530043   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:15:49.530101   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:15:49.530131   33567 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:15:49.530154   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:15:49.530184   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:15:49.530215   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:15:49.530240   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:15:49.530337   33567 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:15:49.530376   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.530392   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.530407   33567 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.531417   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:15:49.556111   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:15:49.577933   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:15:49.601123   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:15:49.623704   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 17:15:49.645796   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:15:49.668042   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:15:49.691469   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/ha-764617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:15:49.713630   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:15:49.736194   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:15:49.759002   33567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:15:49.781331   33567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:15:49.796401   33567 ssh_runner.go:195] Run: openssl version
	I0816 17:15:49.801725   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:15:49.811298   33567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.815214   33567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.815256   33567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:15:49.820340   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:15:49.828902   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:15:49.838333   33567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.842205   33567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.842259   33567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:15:49.847306   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:15:49.855687   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:15:49.865422   33567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.869385   33567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.869419   33567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:15:49.874514   33567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 17:15:49.882797   33567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:15:49.886837   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 17:15:49.899849   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 17:15:49.906766   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 17:15:49.916591   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 17:15:49.928299   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 17:15:49.936397   33567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
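Each of the openssl x509 -checkend 86400 runs above asks whether the named certificate will still be valid 24 hours from now. A rough Go equivalent, shown only as a sketch (it assumes it runs on the node, so the certificate path from the log is readable), is:

    // Rough equivalent of `openssl x509 -noout -checkend 86400`: report whether a
    // PEM certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in certificate file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(24 * time.Hour)
        fmt.Printf("notAfter=%s expiresWithin24h=%v\n", cert.NotAfter, cert.NotAfter.Before(deadline))
    }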
	I0816 17:15:49.942566   33567 kubeadm.go:392] StartCluster: {Name:ha-764617 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-764617 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:15:49.942691   33567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:15:49.942732   33567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:15:50.012616   33567 cri.go:89] found id: "78a2c078c9a836f3c3f3792f4888cf462cc0115bbd832fb0b3fe6afcea71593b"
	I0816 17:15:50.012666   33567 cri.go:89] found id: "49173ab56bb476ad0e5e598050b2d6cdf03bad18ffd952c9fc5a040efba23313"
	I0816 17:15:50.012671   33567 cri.go:89] found id: "a13c43bf5322cc3c68429cd57b4f2b0cd808310cbf83a054c8f8ceac9247fdc9"
	I0816 17:15:50.012674   33567 cri.go:89] found id: "7484d3705a58cf84eea46cc2853fefc74ff28ce7be490d80fd998780a1345a8b"
	I0816 17:15:50.012676   33567 cri.go:89] found id: "d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5"
	I0816 17:15:50.012680   33567 cri.go:89] found id: "8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf"
	I0816 17:15:50.012682   33567 cri.go:89] found id: "b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24"
	I0816 17:15:50.012685   33567 cri.go:89] found id: "1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d"
	I0816 17:15:50.012687   33567 cri.go:89] found id: "6b4d4cb04162c2a865b03b9d68c6d63fe9ac39bfd8c3a34420cef100c23de268"
	I0816 17:15:50.012694   33567 cri.go:89] found id: "c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f"
	I0816 17:15:50.012696   33567 cri.go:89] found id: "547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b"
	I0816 17:15:50.012711   33567 cri.go:89] found id: "0d7b524ef17cfbc76cf8e0ec5c8dc05fb415ba95dd20034cc9e994fe15802183"
	I0816 17:15:50.012714   33567 cri.go:89] found id: "5964f78981acee32a76525df3d36071ce0c8b129aa0af6ff7aa1cdaff80b4110"
	I0816 17:15:50.012716   33567 cri.go:89] found id: ""
	I0816 17:15:50.012759   33567 ssh_runner.go:195] Run: sudo runc list -f json
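The container IDs found above come from running crictl over SSH with a namespace label filter. A minimal Go sketch of the same lookup (assuming crictl and passwordless sudo are available where it runs, as they are on the test VM) is:

    // Illustrative sketch: list kube-system container IDs the same way the log does,
    // by shelling out to crictl with the namespace label filter.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        // crictl --quiet prints one container ID per line.
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }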
	
	
	==> CRI-O <==
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.774836965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828885774815622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3324793a-fc70-42a2-a468-ab967d866ac2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.775255771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=332590c6-1d8c-4612-a989-c4b410d82a57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.775320116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=332590c6-1d8c-4612-a989-c4b410d82a57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.775718691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=332590c6-1d8c-4612-a989-c4b410d82a57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.817635428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20c53e30-cc2e-4811-8cb9-a06b0bc54c93 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.817726657Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20c53e30-cc2e-4811-8cb9-a06b0bc54c93 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.818873893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6272ac4a-63b7-4a18-8b34-d0c02145a49c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.819391114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828885819361677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6272ac4a-63b7-4a18-8b34-d0c02145a49c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.819823560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27f3a076-5a19-45ae-b716-33c0502638c1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.819880555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27f3a076-5a19-45ae-b716-33c0502638c1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.820326167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27f3a076-5a19-45ae-b716-33c0502638c1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.860421627Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99bb60e3-a857-4a1e-b272-eb3c7181adbf name=/runtime.v1.RuntimeService/Version
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.860507088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99bb60e3-a857-4a1e-b272-eb3c7181adbf name=/runtime.v1.RuntimeService/Version
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.861842875Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03868eac-784a-4589-b9e7-dad74b218a25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.862460389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828885862432518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03868eac-784a-4589-b9e7-dad74b218a25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.863008854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39133fa3-8933-4f2c-bcf7-94a356e9962c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.863088960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39133fa3-8933-4f2c-bcf7-94a356e9962c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.864439940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39133fa3-8933-4f2c-bcf7-94a356e9962c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.908057830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0477749b-1147-45b4-b643-66a0af1561e0 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.908178827Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0477749b-1147-45b4-b643-66a0af1561e0 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.909201281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=076eacb8-5d6b-42b2-9b5f-1a5e1a0f329b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.909954505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828885909927689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=076eacb8-5d6b-42b2-9b5f-1a5e1a0f329b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.910511233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4455161-df33-4029-8a5e-afad7664c62b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.910574001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4455161-df33-4029-8a5e-afad7664c62b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:21:25 ha-764617 crio[3620]: time="2024-08-16 17:21:25.910980308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac791931e4d13a44e95dbbe18074f060aa624cf0f580ca310128f507be6bbf03,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723828634581089417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723828598579792512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723828597577658920,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42c6410ce9e1888286c8b697f6379a5bfacef89f9bdbfb33bad67cf3d03394b,PodSandboxId:651fc3ebab41e91c11dbd9ec45bb8b289a1439ce84ec71b882f60ed00ae65ff9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723828592584796731,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15a0a2d4-69d6-4a6b-9199-f8785e015c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a44b535a74ed775a0bd6458b6bac0ad19ba8996e3f1c7325d30c5a3ae67297,PodSandboxId:872ace8403f2540a30c549847c18e8f9f5807493bb8ea967789ea4a7e014933c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723828589495393551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19098e85977205229a51b1b4c24778595dcbc936d499282e98c120e9ff36695c,PodSandboxId:fa54cec332137f89fe347c604ba465a243e0da76448cfa44dfbad6b11d5b729e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723828572580092397,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a38b1e2e1b12167875f857f3d80e7b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a,PodSandboxId:31b42c4c94b6f6c09d5a6b52f1b812af66811b2a302205fb3a6b7ecb5a764d6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723828556165805551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb,PodSandboxId:ac31589ee8673c28232e093f596c034bb71506aeb2765639d7ee3ae75dc7e97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723828556229024809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290a055bbcb3d1cf9eb284751a
1e49d8b00b30dffd90fe3456c00e2d0d23dadb,PodSandboxId:ea534f2443e4a9a38561d602ba9e916096ad3fd8fbd7e5bb5f9370fb199dfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723828556115461754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d9a187c472f17e2ba03b6daf392b7e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417237a361000abc40887
a2662fb7b87d56264d8520dea58cafbba0151e2ce42,PodSandboxId:7ff7258bc28f6b1905834c430adc872d22465a481131c790cff4ae1dc0f9f4fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828556226312067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa,PodSandboxId:8667e68ae5f7bb3667dc32338de129e4dbeae550106e74bc76e6ced118844a0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723828556124470990,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993,PodSandboxId:080ca9d7f3bf76f833091d1838a68a575c1cde45e2b3ac2cb29200b42e6fcb48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723828556050965217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940,PodSandboxId:a58c1ee703662e3f9cc4701c7c92ce72463943104460dd68c4441ac04e9c9171,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723828555942745800,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cbd9593cdf012e272df9a250d0e00c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114,PodSandboxId:f0e8a8ba4e74af4758b8a010b6164dd1a2a0a190dbcc343d150b4efc61b24d4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723828550059770473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f49214f24a1f9d4e237db072dea4cb4011708fed1d55a3518bae64afc9a36de,PodSandboxId:31ad2ee33305c2e247dc968727a480a35dd4e341f754742f3ab45df490c9d9b6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723828052424014199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rcq66,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4f9584-2155-48ce-80fa-30bac466b9f5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5,PodSandboxId:570a9af97580c893bd59ad0da740672446aad828cc2953120c5936aa6c511600,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909473320957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rhb6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea20ec0a-a16e-4703-bb54-2e54c31acd40,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf,PodSandboxId:a96010807e82ab97d921d55fe01739b0b7dc0b5234c6f3d29ffd5e0d7ae9a661,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723827909453875946,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-d6c7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 255004b9-d05e-4686-9e9c-6ec6f7aae439,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24,PodSandboxId:850550a63d423740e066cd344cb8eca06c8ab1ce91d384e5962c2c137bc4c4e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723827897695873378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-94vkj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1ce0b8c-c2c8-400a-a013-6eb89e550cd9,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d,PodSandboxId:7fa8ce6eea9326aa2d9e78422a5917fa128392accfa0f9395fc23862f5324831,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723827894190049003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j75vc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50262aeb-9d97-4093-a43f-cb24a5515abb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f,PodSandboxId:c5d6c0455efc0ceafe45e3e9defa6b3e39bf97e12d07d6d742a88b3cbeaf65b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723827882765000311,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a89700dd245c99cee73a27284c5b094,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b,PodSandboxId:09ec8ad12f1f1dcd1c0e02dc275ba62dff2a2b5f6ca7e7fd781694c2d45e4a18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723827882761308074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-764617,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c4aa250fdc29f3673166187d642d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4455161-df33-4029-8a5e-afad7664c62b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac791931e4d13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   651fc3ebab41e       storage-provisioner
	dac1a1703e40d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   ea534f2443e4a       kube-controller-manager-ha-764617
	f4f92a8547dae       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   a58c1ee703662       kube-apiserver-ha-764617
	d42c6410ce9e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   651fc3ebab41e       storage-provisioner
	e2a44b535a74e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   872ace8403f25       busybox-7dff88458-rcq66
	19098e8597720       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   fa54cec332137       kube-vip-ha-764617
	f97fbca9aeaa7       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   ac31589ee8673       kube-proxy-j75vc
	417237a361000       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   7ff7258bc28f6       coredns-6f6b679f8f-rhb6h
	030986f0ddc53       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   31b42c4c94b6f       etcd-ha-764617
	8281ecd4daddf       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   8667e68ae5f7b       kindnet-94vkj
	290a055bbcb3d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   ea534f2443e4a       kube-controller-manager-ha-764617
	aeafb73cca635       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   080ca9d7f3bf7       kube-scheduler-ha-764617
	40117a7184f25       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   a58c1ee703662       kube-apiserver-ha-764617
	d2eaae7703339       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   f0e8a8ba4e74a       coredns-6f6b679f8f-d6c7g
	8f49214f24a1f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   31ad2ee33305c       busybox-7dff88458-rcq66
	d21ff55e0d154       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   570a9af97580c       coredns-6f6b679f8f-rhb6h
	8eefbb289cdc6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   a96010807e82a       coredns-6f6b679f8f-d6c7g
	b7c860bdbf8f8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   850550a63d423       kindnet-94vkj
	1aaf72ada1592       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   7fa8ce6eea932       kube-proxy-j75vc
	c020d60e48e21       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   c5d6c0455efc0       etcd-ha-764617
	547ba7c3099cf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   09ec8ad12f1f1       kube-scheduler-ha-764617
	
	
	==> coredns [417237a361000abc40887a2662fb7b87d56264d8520dea58cafbba0151e2ce42] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41244->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41244->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41228->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41228->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41222->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1466542355]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:16:08.046) (total time: 12917ms):
	Trace[1466542355]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41222->10.96.0.1:443: read: connection reset by peer 12917ms (17:16:20.963)
	Trace[1466542355]: [12.917946118s] [12.917946118s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41222->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8eefbb289cdc64359e43e612fda427c74a5cd21d8765173cad59129f113b6faf] <==
	[INFO] 10.244.1.2:52681 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013622s
	[INFO] 10.244.1.2:34428 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179694s
	[INFO] 10.244.1.2:38361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107495s
	[INFO] 10.244.0.4:33031 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072835s
	[INFO] 10.244.0.4:46219 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00004433s
	[INFO] 10.244.2.2:36496 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117578s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1915&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1915&timeout=5m42s&timeoutSeconds=342&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[447197809]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:14:01.648) (total time: 10064ms):
	Trace[447197809]: ---"Objects listed" error:Unauthorized 10064ms (17:14:11.713)
	Trace[447197809]: [10.064349415s] [10.064349415s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1233712988]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:14:00.298) (total time: 11415ms):
	Trace[1233712988]: ---"Objects listed" error:Unauthorized 11415ms (17:14:11.714)
	Trace[1233712988]: [11.415834293s] [11.415834293s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d21ff55e0d1541ec985fefc0bbe41954428037be46427e4a0b9b9f6f59ff38b5] <==
	[INFO] 10.244.2.2:33517 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005988s
	[INFO] 10.244.1.2:58731 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174613s
	[INFO] 10.244.1.2:43400 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105057s
	[INFO] 10.244.1.2:41968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104182s
	[INFO] 10.244.0.4:46666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121402s
	[INFO] 10.244.0.4:46004 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066296s
	[INFO] 10.244.2.2:39282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010929s
	[INFO] 10.244.1.2:58290 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151089s
	[INFO] 10.244.0.4:38377 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152447s
	[INFO] 10.244.0.4:57414 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061601s
	[INFO] 10.244.2.2:49722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182712s
	[INFO] 10.244.2.2:47690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014162s
	[INFO] 10.244.2.2:41318 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108034s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1915&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1930&timeout=8m14s&timeoutSeconds=494&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1915": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1930": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1930": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d2eaae77033399ae62f905b8ed70d8509b5839c5f8b6b80e7462075c55fcb114] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1349058062]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:16:04.463) (total time: 10001ms):
	Trace[1349058062]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:16:14.464)
	Trace[1349058062]: [10.001638046s] [10.001638046s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1743112337]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 17:16:04.990) (total time: 10001ms):
	Trace[1743112337]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:16:14.991)
	Trace[1743112337]: [10.001244969s] [10.001244969s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-764617
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_04_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:21:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:04:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:16:41 +0000   Fri, 16 Aug 2024 17:05:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-764617
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c56e74c3649b4538acc75a2edf2b5dea
	  System UUID:                c56e74c3-649b-4538-acc7-5a2edf2b5dea
	  Boot ID:                    b56c67cf-18b1-46e0-819e-927538c01209
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rcq66              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-d6c7g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-rhb6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-764617                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-94vkj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-764617             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-764617    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-j75vc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-764617             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-764617                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m46s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-764617 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-764617 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-764617 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-764617 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Warning  ContainerGCFailed        5m38s (x2 over 6m38s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m35s (x3 over 6m24s)  kubelet          Node ha-764617 status is now: NodeNotReady
	  Normal   RegisteredNode           4m54s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   RegisteredNode           4m43s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-764617 event: Registered Node ha-764617 in Controller
	
	
	Name:               ha-764617-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_05_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:05:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:21:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:17:21 +0000   Fri, 16 Aug 2024 17:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    ha-764617-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b9263e99d3f46399a1ef68b5c9541da
	  System UUID:                9b9263e9-9d3f-4639-9a1e-f68b5c9541da
	  Boot ID:                    2f4561c3-220c-425a-ae31-ea31a2191f13
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5kg62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-764617-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-7l8xt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-764617-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-764617-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-g5szr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-764617-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-764617-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-764617-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-764617-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-764617-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-764617-m02 status is now: NodeNotReady
	  Normal  Starting                 5m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node ha-764617-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node ha-764617-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           4m43s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-764617-m02 event: Registered Node ha-764617-m02 in Controller
	
	
	Name:               ha-764617-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-764617-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=ha-764617
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_08_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:08:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-764617-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:18:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:19:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:19:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:19:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 17:18:39 +0000   Fri, 16 Aug 2024 17:19:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    ha-764617-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6601760275c145fda2c7de8f57c611fa
	  System UUID:                66017602-75c1-45fd-a2c7-de8f57c611fa
	  Boot ID:                    e4e990ae-bfad-4760-916d-243430ff145a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ckcfj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-785hx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-p9gpb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m42s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-764617-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-764617-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-764617-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-764617-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m53s                  node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   RegisteredNode           4m43s                  node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   NodeNotReady             4m13s                  node-controller  Node ha-764617-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-764617-m04 event: Registered Node ha-764617-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-764617-m04 has been rebooted, boot id: e4e990ae-bfad-4760-916d-243430ff145a
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-764617-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-764617-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m47s                  kubelet          Node ha-764617-m04 status is now: NodeReady
	  Normal   NodeNotReady             103s                   node-controller  Node ha-764617-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +6.494535] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.053885] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056699] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.201898] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.107599] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.255485] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.757333] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.397161] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059974] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.993084] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[  +0.077626] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.633091] kauditd_printk_skb: 18 callbacks suppressed
	[Aug16 17:05] kauditd_printk_skb: 41 callbacks suppressed
	[ +41.798128] kauditd_printk_skb: 26 callbacks suppressed
	[Aug16 17:15] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.147419] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.168608] systemd-fstab-generator[3559]: Ignoring "noauto" option for root device
	[  +0.136670] systemd-fstab-generator[3571]: Ignoring "noauto" option for root device
	[  +0.269453] systemd-fstab-generator[3599]: Ignoring "noauto" option for root device
	[  +3.441490] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +6.395201] kauditd_printk_skb: 132 callbacks suppressed
	[Aug16 17:16] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.062557] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.831551] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.360691] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [030986f0ddc53b18e5e09d98f86351494ff02ed8d6b0e901d908a4544c679c3a] <==
	{"level":"warn","ts":"2024-08-16T17:18:02.154841Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"44c87d00f43700c5","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-16T17:18:05.014672Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.015338Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.015396Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.054001Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d6d01a71dfc61a14","to":"44c87d00f43700c5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T17:18:05.054266Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:05.070546Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d6d01a71dfc61a14","to":"44c87d00f43700c5","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T17:18:05.070763Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:52.874276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 switched to configuration voters=(13871824366512854471 15478900995660323348)"}
	{"level":"info","ts":"2024-08-16T17:18:52.876380Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3959cc3c468ccbd1","local-member-id":"d6d01a71dfc61a14","removed-remote-peer-id":"44c87d00f43700c5","removed-remote-peer-urls":["https://192.168.39.253:2380"]}
	{"level":"info","ts":"2024-08-16T17:18:52.876488Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"44c87d00f43700c5"}
	{"level":"warn","ts":"2024-08-16T17:18:52.876860Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:52.876939Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44c87d00f43700c5"}
	{"level":"warn","ts":"2024-08-16T17:18:52.877435Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:52.877504Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:52.877618Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"warn","ts":"2024-08-16T17:18:52.877920Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5","error":"context canceled"}
	{"level":"warn","ts":"2024-08-16T17:18:52.877988Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"44c87d00f43700c5","error":"failed to read 44c87d00f43700c5 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-16T17:18:52.878103Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"warn","ts":"2024-08-16T17:18:52.878445Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T17:18:52.881230Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:52.881257Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:18:52.881272Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"d6d01a71dfc61a14","removed-remote-peer-id":"44c87d00f43700c5"}
	{"level":"warn","ts":"2024-08-16T17:18:52.890325Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"d6d01a71dfc61a14","remote-peer-id-stream-handler":"d6d01a71dfc61a14","remote-peer-id-from":"44c87d00f43700c5"}
	{"level":"warn","ts":"2024-08-16T17:18:52.893742Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"d6d01a71dfc61a14","remote-peer-id-stream-handler":"d6d01a71dfc61a14","remote-peer-id-from":"44c87d00f43700c5"}
	
	
	==> etcd [c020d60e48e21e3f68c2ab1e7dbc4570867af80f727edd6529d1d59b17f11a6f] <==
	{"level":"warn","ts":"2024-08-16T17:14:13.562744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"856.882335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T17:14:13.562755Z","caller":"traceutil/trace.go:171","msg":"trace[1004593196] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"856.894422ms","start":"2024-08-16T17:14:12.705858Z","end":"2024-08-16T17:14:13.562752Z","steps":["trace[1004593196] 'agreement among raft nodes before linearized reading'  (duration: 856.882164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:14:13.562766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T17:14:12.705851Z","time spent":"856.91213ms","remote":"127.0.0.1:47644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 "}
	2024/08/16 17:14:13 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T17:14:13.623112Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.18:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:14:13.623226Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.18:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T17:14:13.623305Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d6d01a71dfc61a14","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T17:14:13.623523Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623588Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623644Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623762Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623839Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623893Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623956Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c0829ec3e89b55c7"}
	{"level":"info","ts":"2024-08-16T17:14:13.623965Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.623978Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624021Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624083Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624230Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d6d01a71dfc61a14","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.624287Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"44c87d00f43700c5"}
	{"level":"info","ts":"2024-08-16T17:14:13.626941Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-08-16T17:14:13.627026Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-08-16T17:14:13.627046Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-764617","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.18:2380"],"advertise-client-urls":["https://192.168.39.18:2379"]}
	{"level":"warn","ts":"2024-08-16T17:14:13.627033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.915963497s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> kernel <==
	 17:21:26 up 17 min,  0 users,  load average: 0.21, 0.36, 0.33
	Linux ha-764617 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8281ecd4daddf7726964e428d3d817eb8b5f5b72ebf8f05f4209903dabfadeaa] <==
	I0816 17:20:37.008218       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:20:47.002324       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:20:47.002392       1 main.go:299] handling current node
	I0816 17:20:47.002409       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:20:47.002417       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:20:47.002638       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:20:47.002656       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:20:57.000632       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:20:57.000680       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:20:57.000829       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:20:57.000864       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:20:57.000918       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:20:57.000933       1 main.go:299] handling current node
	I0816 17:21:07.009621       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:21:07.009697       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:21:07.009821       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:21:07.009922       1 main.go:299] handling current node
	I0816 17:21:07.009953       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:21:07.009973       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:21:17.002765       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:21:17.002879       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:21:17.003044       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:21:17.003069       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:21:17.003207       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:21:17.003238       1 main.go:299] handling current node
	
	
	==> kindnet [b7c860bdbf8f8c3ce3e6ce12afcd55605f1afdf02828581465b57f088bf9fc24] <==
	I0816 17:13:48.552243       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:13:48.552292       1 main.go:299] handling current node
	I0816 17:13:48.552314       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:13:48.552319       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:13:48.552473       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:13:48.552492       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:13:48.552553       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:13:48.552570       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:13:58.551071       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:13:58.551115       1 main.go:299] handling current node
	I0816 17:13:58.551176       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:13:58.551183       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:13:58.551405       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:13:58.551453       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:13:58.551629       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:13:58.551660       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:14:08.552049       1 main.go:295] Handling node with IPs: map[192.168.39.184:{}]
	I0816 17:14:08.552200       1 main.go:322] Node ha-764617-m02 has CIDR [10.244.1.0/24] 
	I0816 17:14:08.552373       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0816 17:14:08.552395       1 main.go:322] Node ha-764617-m03 has CIDR [10.244.2.0/24] 
	I0816 17:14:08.552487       1 main.go:295] Handling node with IPs: map[192.168.39.137:{}]
	I0816 17:14:08.552503       1 main.go:322] Node ha-764617-m04 has CIDR [10.244.3.0/24] 
	I0816 17:14:08.552594       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0816 17:14:08.552613       1 main.go:299] handling current node
	E0816 17:14:09.443722       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1899&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [40117a7184f2549debf482d1b96d53560d5bc57d1fa0fb46ea93007ee8f3d940] <==
	I0816 17:15:56.615730       1 options.go:228] external host was not specified, using 192.168.39.18
	I0816 17:15:56.629059       1 server.go:142] Version: v1.31.0
	I0816 17:15:56.629244       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:15:57.455030       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 17:15:57.476247       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:15:57.484034       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 17:15:57.484180       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 17:15:57.484472       1 instance.go:232] Using reconciler: lease
	W0816 17:16:17.454716       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0816 17:16:17.454717       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0816 17:16:17.487271       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f4f92a8547daed9b708e0671f69fe945f34065114cd65321f075d9534251caee] <==
	I0816 17:16:39.962559       1 establishing_controller.go:81] Starting EstablishingController
	I0816 17:16:39.962582       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0816 17:16:39.962588       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 17:16:39.962600       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0816 17:16:40.063961       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:16:40.066023       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:16:40.067628       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:16:40.067928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 17:16:40.070489       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:16:40.073483       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 17:16:40.073687       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:16:40.074494       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:16:40.079054       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:16:40.079187       1 policy_source.go:224] refreshing policies
	I0816 17:16:40.081651       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:16:40.094544       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:16:40.097077       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:16:40.097230       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:16:40.097272       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:16:40.097300       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:16:40.097324       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:16:40.977701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 17:16:41.708583       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18 192.168.39.184]
	I0816 17:16:41.710170       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:16:41.732036       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [290a055bbcb3d1cf9eb284751a1e49d8b00b30dffd90fe3456c00e2d0d23dadb] <==
	I0816 17:15:57.347991       1 serving.go:386] Generated self-signed cert in-memory
	I0816 17:15:57.599844       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 17:15:57.599957       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:15:57.601665       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 17:15:57.601825       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 17:15:57.602324       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 17:15:57.602410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0816 17:16:18.493788       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.18:8443/healthz\": dial tcp 192.168.39.18:8443: connect: connection refused"
	
	
	==> kube-controller-manager [dac1a1703e40dc64e13d71f786ff8b6bad8f232b19fdddbe915665a5c1a0627d] <==
	I0816 17:18:52.984957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.475764ms"
	I0816 17:18:52.985032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.379µs"
	I0816 17:19:03.741992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m03"
	I0816 17:19:03.742612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-764617-m04"
	E0816 17:19:23.359111       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:23.359373       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:23.359407       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:23.359463       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:23.359490       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:43.360087       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:43.360127       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:43.360168       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:43.360174       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:19:43.360180       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	I0816 17:19:43.420358       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:19:43.445900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:19:43.461766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.241264ms"
	I0816 17:19:43.463316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.741µs"
	I0816 17:19:43.532070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	I0816 17:19:48.573318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-764617-m04"
	E0816 17:20:03.360348       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:20:03.360484       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:20:03.360512       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:20:03.360536       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	E0816 17:20:03.360559       1 gc_controller.go:151] "Failed to get node" err="node \"ha-764617-m03\" not found" logger="pod-garbage-collector-controller" node="ha-764617-m03"
	
	
	==> kube-proxy [1aaf72ada1592eb9db02d4b420e3712769e6486d17ade2669153a86196aec79d] <==
	E0816 17:12:49.380013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:12:56.419670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:12:56.419745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:12:56.419833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:12:56.419867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:12:56.419980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:12:56.420093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:05.061614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:05.061752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:05.061634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	W0816 17:13:05.061845       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:05.061851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0816 17:13:05.061862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:20.420096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:20.420208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:26.563815       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:26.563914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:13:26.564083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:13:26.564182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:14:03.428978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:14:03.429261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-764617&resourceVersion=1888\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:14:03.429083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:14:03.429416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1893\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0816 17:14:06.500827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822": dial tcp 192.168.39.254:8443: connect: no route to host
	E0816 17:14:06.500917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1822\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [f97fbca9aeaa7f4efc01aa36e9842fcbe1317fa5826d11051aab16ce390474fb] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:16:00.164339       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:03.235627       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:06.308200       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:12.451678       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0816 17:16:21.667673       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-764617\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0816 17:16:39.662307       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	E0816 17:16:39.662531       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:16:40.129982       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:16:40.130116       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:16:40.130286       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:16:40.133388       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:16:40.133849       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:16:40.133901       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:16:40.136174       1 config.go:197] "Starting service config controller"
	I0816 17:16:40.136397       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:16:40.136461       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:16:40.136494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:16:40.137291       1 config.go:326] "Starting node config controller"
	I0816 17:16:40.137331       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:16:40.237367       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 17:16:40.237442       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:16:40.237924       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [547ba7c3099cf77e755ecafde9e451130522c4ac7ad82e4d1425c79c1f03ae1b] <==
	W0816 17:04:46.812070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:04:46.812114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 17:04:48.641797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:07:29.208916       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rvd47" node="ha-764617-m03"
	E0816 17:07:29.209097       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rvd47\": pod busybox-7dff88458-rvd47 is already assigned to node \"ha-764617-m03\"" pod="default/busybox-7dff88458-rvd47"
	E0816 17:07:29.210073       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rcq66" node="ha-764617"
	E0816 17:07:29.218500       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rcq66\": pod busybox-7dff88458-rcq66 is already assigned to node \"ha-764617\"" pod="default/busybox-7dff88458-rcq66"
	E0816 17:08:05.463041       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	E0816 17:08:05.468950       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 82c775a8-d580-4201-9da7-790a5a95ef6f(kube-system/kindnet-785hx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-785hx"
	E0816 17:08:05.469002       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-785hx\": pod kindnet-785hx is already assigned to node \"ha-764617-m04\"" pod="kube-system/kindnet-785hx"
	I0816 17:08:05.469055       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-785hx" node="ha-764617-m04"
	E0816 17:14:04.583240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0816 17:14:04.846054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0816 17:14:05.519747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0816 17:14:07.227009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0816 17:14:08.262040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0816 17:14:09.631030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0816 17:14:10.772187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0816 17:14:11.268912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0816 17:14:11.374786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0816 17:14:11.570866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0816 17:14:12.701254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0816 17:14:13.171368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0816 17:14:13.491001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0816 17:14:13.554502       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aeafb73cca63549a23e7a8a77c52a26c0759572b031848d5098a1f5ef81b3993] <==
	W0816 17:16:34.378890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.18:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.378966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.18:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:34.479671       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.18:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.479744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.18:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:34.572313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.18:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:34.572360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.18:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:36.136767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.18:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:36.136930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.18:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:36.611952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.18:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:36.612004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.18:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:36.622913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.18:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:36.622976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.18:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:37.284461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.18:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:37.284523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.18:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:37.402497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.18:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.18:8443: connect: connection refused
	E0816 17:16:37.402564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.18:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.18:8443: connect: connection refused" logger="UnhandledError"
	W0816 17:16:40.005972       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:16:40.006487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:16:40.006719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:16:40.006791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 17:16:55.706093       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:18:49.563095       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ckcfj\": pod busybox-7dff88458-ckcfj is already assigned to node \"ha-764617-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ckcfj" node="ha-764617-m04"
	E0816 17:18:49.567256       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b79ac2c1-540f-4952-ab43-73acdf91d9ba(default/busybox-7dff88458-ckcfj) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ckcfj"
	E0816 17:18:49.570057       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ckcfj\": pod busybox-7dff88458-ckcfj is already assigned to node \"ha-764617-m04\"" pod="default/busybox-7dff88458-ckcfj"
	I0816 17:18:49.570251       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ckcfj" node="ha-764617-m04"
	
	
	==> kubelet <==
	Aug 16 17:19:48 ha-764617 kubelet[1328]: E0816 17:19:48.943617    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828788943012712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:19:48 ha-764617 kubelet[1328]: E0816 17:19:48.943658    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828788943012712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:19:58 ha-764617 kubelet[1328]: E0816 17:19:58.946815    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828798946311533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:19:58 ha-764617 kubelet[1328]: E0816 17:19:58.947253    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828798946311533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:08 ha-764617 kubelet[1328]: E0816 17:20:08.949519    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828808949079395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:08 ha-764617 kubelet[1328]: E0816 17:20:08.949829    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828808949079395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:18 ha-764617 kubelet[1328]: E0816 17:20:18.953560    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828818952742866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:18 ha-764617 kubelet[1328]: E0816 17:20:18.953653    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828818952742866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:28 ha-764617 kubelet[1328]: E0816 17:20:28.956382    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828828955659171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:28 ha-764617 kubelet[1328]: E0816 17:20:28.956704    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828828955659171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:38 ha-764617 kubelet[1328]: E0816 17:20:38.959112    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828838958662460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:38 ha-764617 kubelet[1328]: E0816 17:20:38.959646    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828838958662460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:48 ha-764617 kubelet[1328]: E0816 17:20:48.598588    1328 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:20:48 ha-764617 kubelet[1328]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:20:48 ha-764617 kubelet[1328]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:20:48 ha-764617 kubelet[1328]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:20:48 ha-764617 kubelet[1328]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:20:48 ha-764617 kubelet[1328]: E0816 17:20:48.963360    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828848961233770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:48 ha-764617 kubelet[1328]: E0816 17:20:48.963385    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828848961233770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:58 ha-764617 kubelet[1328]: E0816 17:20:58.965458    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828858965035543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:20:58 ha-764617 kubelet[1328]: E0816 17:20:58.965511    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828858965035543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:21:08 ha-764617 kubelet[1328]: E0816 17:21:08.967026    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828868966571659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:21:08 ha-764617 kubelet[1328]: E0816 17:21:08.967366    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828868966571659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:21:18 ha-764617 kubelet[1328]: E0816 17:21:18.970194    1328 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828878969601922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:21:18 ha-764617 kubelet[1328]: E0816 17:21:18.970300    1328 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723828878969601922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 17:21:25.510447   36462 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19461-9545/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-764617 -n ha-764617
helpers_test.go:261: (dbg) Run:  kubectl --context ha-764617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.64s)
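Note: the "bufio.Scanner: token too long" error in the stderr block above (while reading lastStart.txt) is the generic failure mode of Go's bufio.Scanner when a single line exceeds its buffer, which defaults to 64 KiB. The sketch below is illustrative only — it is not minikube's logs.go implementation — and simply shows how a scanner can be given a larger buffer so a file with very long lines (the path is the one printed in this run) can be read without hitting that error:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19461-9545/.minikube/logs/lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; allow lines up to 10 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above, this would report bufio.ErrTooLong
			// ("bufio.Scanner: token too long") for an oversized line.
			log.Fatal(err)
		}
	}
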

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (331.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-797386
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-797386
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-797386: exit status 82 (2m1.76929293s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-797386-m03"  ...
	* Stopping node "multinode-797386-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-797386" : exit status 82
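Note: the non-zero exit status 82 reported here came from the `stop` invocation shown above (the stderr block maps it to GUEST_STOP_TIMEOUT: the VM was still "Running" when the stop timed out). A minimal sketch, not the test's helper code, of re-running the same command and surfacing its exit code — the binary path and profile name are the ones from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test issued at multinode_test.go:321.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-797386")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// This run reported 82 (GUEST_STOP_TIMEOUT) here.
			fmt.Println("minikube stop exit code:", exitErr.ExitCode())
		}
	}
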
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-797386 --wait=true -v=8 --alsologtostderr
E0816 17:38:21.061589   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:41:12.269026   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:41:24.129626   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-797386 --wait=true -v=8 --alsologtostderr: (3m27.384276853s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-797386
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-797386 -n multinode-797386
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-797386 logs -n 25: (1.401903062s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3908969690/001/cp-test_multinode-797386-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386:/home/docker/cp-test_multinode-797386-m02_multinode-797386.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386 sudo cat                                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m02_multinode-797386.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03:/home/docker/cp-test_multinode-797386-m02_multinode-797386-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386-m03 sudo cat                                   | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m02_multinode-797386-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp testdata/cp-test.txt                                                | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3908969690/001/cp-test_multinode-797386-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386:/home/docker/cp-test_multinode-797386-m03_multinode-797386.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386 sudo cat                                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m03_multinode-797386.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02:/home/docker/cp-test_multinode-797386-m03_multinode-797386-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386-m02 sudo cat                                   | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m03_multinode-797386-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-797386 node stop m03                                                          | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	| node    | multinode-797386 node start                                                             | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-797386                                                                | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:36 UTC |                     |
	| stop    | -p multinode-797386                                                                     | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:36 UTC |                     |
	| start   | -p multinode-797386                                                                     | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:38 UTC | 16 Aug 24 17:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-797386                                                                | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:38:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:38:16.837227   45790 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:38:16.837335   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:38:16.837348   45790 out.go:358] Setting ErrFile to fd 2...
	I0816 17:38:16.837353   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:38:16.837521   45790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:38:16.838083   45790 out.go:352] Setting JSON to false
	I0816 17:38:16.839021   45790 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4795,"bootTime":1723825102,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:38:16.839079   45790 start.go:139] virtualization: kvm guest
	I0816 17:38:16.841196   45790 out.go:177] * [multinode-797386] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:38:16.842645   45790 notify.go:220] Checking for updates...
	I0816 17:38:16.842650   45790 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:38:16.844113   45790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:38:16.845458   45790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:38:16.846610   45790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:38:16.847732   45790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:38:16.848881   45790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:38:16.850870   45790 config.go:182] Loaded profile config "multinode-797386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:38:16.850945   45790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:38:16.851366   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:38:16.851411   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:38:16.866398   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42447
	I0816 17:38:16.866792   45790 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:38:16.867386   45790 main.go:141] libmachine: Using API Version  1
	I0816 17:38:16.867406   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:38:16.867775   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:38:16.868013   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:38:16.902858   45790 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 17:38:16.904046   45790 start.go:297] selected driver: kvm2
	I0816 17:38:16.904069   45790 start.go:901] validating driver "kvm2" against &{Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:38:16.904232   45790 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:38:16.904604   45790 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:38:16.904710   45790 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 17:38:16.920164   45790 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 17:38:16.921047   45790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:38:16.921128   45790 cni.go:84] Creating CNI manager for ""
	I0816 17:38:16.921138   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 17:38:16.921221   45790 start.go:340] cluster config:
	{Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:38:16.921400   45790 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:38:16.923090   45790 out.go:177] * Starting "multinode-797386" primary control-plane node in "multinode-797386" cluster
	I0816 17:38:16.924158   45790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:38:16.924186   45790 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 17:38:16.924195   45790 cache.go:56] Caching tarball of preloaded images
	I0816 17:38:16.924269   45790 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:38:16.924280   45790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:38:16.924402   45790 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/config.json ...
	I0816 17:38:16.924640   45790 start.go:360] acquireMachinesLock for multinode-797386: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:38:16.924693   45790 start.go:364] duration metric: took 29.995µs to acquireMachinesLock for "multinode-797386"
	I0816 17:38:16.924713   45790 start.go:96] Skipping create...Using existing machine configuration
	I0816 17:38:16.924725   45790 fix.go:54] fixHost starting: 
	I0816 17:38:16.925011   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:38:16.925042   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:38:16.939606   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0816 17:38:16.940051   45790 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:38:16.940537   45790 main.go:141] libmachine: Using API Version  1
	I0816 17:38:16.940558   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:38:16.940901   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:38:16.941064   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:38:16.941240   45790 main.go:141] libmachine: (multinode-797386) Calling .GetState
	I0816 17:38:16.942867   45790 fix.go:112] recreateIfNeeded on multinode-797386: state=Running err=<nil>
	W0816 17:38:16.942898   45790 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 17:38:16.944796   45790 out.go:177] * Updating the running kvm2 "multinode-797386" VM ...
	I0816 17:38:16.946013   45790 machine.go:93] provisionDockerMachine start ...
	I0816 17:38:16.946033   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:38:16.946237   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:16.948944   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:16.949405   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:16.949447   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:16.949531   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:16.949729   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:16.949909   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:16.950072   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:16.950368   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:16.950553   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:16.950566   45790 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 17:38:17.069477   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-797386
	
	I0816 17:38:17.069512   45790 main.go:141] libmachine: (multinode-797386) Calling .GetMachineName
	I0816 17:38:17.069749   45790 buildroot.go:166] provisioning hostname "multinode-797386"
	I0816 17:38:17.069772   45790 main.go:141] libmachine: (multinode-797386) Calling .GetMachineName
	I0816 17:38:17.069987   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.073086   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.073500   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.073533   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.073689   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.073872   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.074037   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.074203   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.074469   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:17.074623   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:17.074641   45790 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-797386 && echo "multinode-797386" | sudo tee /etc/hostname
	I0816 17:38:17.204595   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-797386
	
	I0816 17:38:17.204628   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.207746   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.208199   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.208233   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.208443   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.208639   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.208780   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.208948   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.209088   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:17.209260   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:17.209276   45790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-797386' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-797386/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-797386' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:38:17.321252   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:38:17.321280   45790 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:38:17.321317   45790 buildroot.go:174] setting up certificates
	I0816 17:38:17.321328   45790 provision.go:84] configureAuth start
	I0816 17:38:17.321342   45790 main.go:141] libmachine: (multinode-797386) Calling .GetMachineName
	I0816 17:38:17.321642   45790 main.go:141] libmachine: (multinode-797386) Calling .GetIP
	I0816 17:38:17.324113   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.324446   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.324476   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.324601   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.326884   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.327240   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.327273   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.327372   45790 provision.go:143] copyHostCerts
	I0816 17:38:17.327401   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:38:17.327440   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:38:17.327455   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:38:17.327521   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:38:17.327617   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:38:17.327634   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:38:17.327639   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:38:17.327663   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:38:17.327748   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:38:17.327765   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:38:17.327771   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:38:17.327800   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:38:17.327887   45790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.multinode-797386 san=[127.0.0.1 192.168.39.218 localhost minikube multinode-797386]
	I0816 17:38:17.449642   45790 provision.go:177] copyRemoteCerts
	I0816 17:38:17.449705   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:38:17.449727   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.452434   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.452879   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.452911   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.453140   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.453345   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.453563   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.453706   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:38:17.538254   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:38:17.538313   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:38:17.577141   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:38:17.577210   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0816 17:38:17.601934   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:38:17.601988   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:38:17.635299   45790 provision.go:87] duration metric: took 313.959165ms to configureAuth
	I0816 17:38:17.635321   45790 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:38:17.635543   45790 config.go:182] Loaded profile config "multinode-797386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:38:17.635609   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.638159   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.638553   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.638590   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.638772   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.638984   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.639168   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.639319   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.639488   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:17.639700   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:17.639716   45790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:39:48.412501   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:39:48.412555   45790 machine.go:96] duration metric: took 1m31.466526132s to provisionDockerMachine
	I0816 17:39:48.412591   45790 start.go:293] postStartSetup for "multinode-797386" (driver="kvm2")
	I0816 17:39:48.412646   45790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:39:48.412686   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.413132   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:39:48.413168   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.416296   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.416796   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.416823   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.417030   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.417232   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.417401   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.417541   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:39:48.503856   45790 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:39:48.507834   45790 command_runner.go:130] > NAME=Buildroot
	I0816 17:39:48.507852   45790 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 17:39:48.507859   45790 command_runner.go:130] > ID=buildroot
	I0816 17:39:48.507869   45790 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 17:39:48.507876   45790 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 17:39:48.507936   45790 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:39:48.507962   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:39:48.508036   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:39:48.508112   45790 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:39:48.508121   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:39:48.508198   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:39:48.517032   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:39:48.539799   45790 start.go:296] duration metric: took 127.191585ms for postStartSetup
	I0816 17:39:48.539860   45790 fix.go:56] duration metric: took 1m31.615139668s for fixHost
	I0816 17:39:48.539893   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.542819   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.543187   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.543216   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.543391   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.543624   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.543835   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.543971   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.544145   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:39:48.544303   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:39:48.544312   45790 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:39:48.656917   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829988.629867005
	
	I0816 17:39:48.656937   45790 fix.go:216] guest clock: 1723829988.629867005
	I0816 17:39:48.656954   45790 fix.go:229] Guest: 2024-08-16 17:39:48.629867005 +0000 UTC Remote: 2024-08-16 17:39:48.539871648 +0000 UTC m=+91.738991900 (delta=89.995357ms)
	I0816 17:39:48.656982   45790 fix.go:200] guest clock delta is within tolerance: 89.995357ms
	I0816 17:39:48.656988   45790 start.go:83] releasing machines lock for "multinode-797386", held for 1m31.732283366s
	I0816 17:39:48.657008   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.657255   45790 main.go:141] libmachine: (multinode-797386) Calling .GetIP
	I0816 17:39:48.659940   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.660305   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.660330   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.660463   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.660958   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.661128   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.661202   45790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:39:48.661254   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.661372   45790 ssh_runner.go:195] Run: cat /version.json
	I0816 17:39:48.661394   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.663936   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664161   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664437   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.664464   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664581   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.664603   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664653   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.664735   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.664813   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.664883   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.664937   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.665042   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.665091   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:39:48.665158   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:39:48.745092   45790 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0816 17:39:48.745329   45790 ssh_runner.go:195] Run: systemctl --version
	I0816 17:39:48.786038   45790 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 17:39:48.786103   45790 command_runner.go:130] > systemd 252 (252)
	I0816 17:39:48.786140   45790 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0816 17:39:48.786211   45790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:39:48.941095   45790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 17:39:48.946586   45790 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 17:39:48.946733   45790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:39:48.946803   45790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:39:48.955875   45790 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 17:39:48.955895   45790 start.go:495] detecting cgroup driver to use...
	I0816 17:39:48.955955   45790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:39:48.971461   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:39:48.984534   45790 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:39:48.984603   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:39:48.997732   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:39:49.011264   45790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:39:49.149843   45790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:39:49.286037   45790 docker.go:233] disabling docker service ...
	I0816 17:39:49.286116   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:39:49.303105   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:39:49.316640   45790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:39:49.455302   45790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:39:49.600322   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:39:49.613527   45790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:39:49.630926   45790 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0816 17:39:49.631349   45790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:39:49.631397   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.641394   45790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:39:49.641453   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.651109   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.660887   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.670607   45790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:39:49.680288   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.689719   45790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.699617   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.709148   45790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:39:49.717798   45790 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 17:39:49.717858   45790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:39:49.726334   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:39:49.867195   45790 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:39:57.859806   45790 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.992568521s)
	I0816 17:39:57.859837   45790 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:39:57.859879   45790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:39:57.864472   45790 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0816 17:39:57.864493   45790 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 17:39:57.864505   45790 command_runner.go:130] > Device: 0,22	Inode: 1333        Links: 1
	I0816 17:39:57.864514   45790 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 17:39:57.864519   45790 command_runner.go:130] > Access: 2024-08-16 17:39:57.730238821 +0000
	I0816 17:39:57.864529   45790 command_runner.go:130] > Modify: 2024-08-16 17:39:57.730238821 +0000
	I0816 17:39:57.864535   45790 command_runner.go:130] > Change: 2024-08-16 17:39:57.730238821 +0000
	I0816 17:39:57.864539   45790 command_runner.go:130] >  Birth: -
	I0816 17:39:57.864582   45790 start.go:563] Will wait 60s for crictl version
	I0816 17:39:57.864640   45790 ssh_runner.go:195] Run: which crictl
	I0816 17:39:57.867942   45790 command_runner.go:130] > /usr/bin/crictl
	I0816 17:39:57.868095   45790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:39:57.901662   45790 command_runner.go:130] > Version:  0.1.0
	I0816 17:39:57.901692   45790 command_runner.go:130] > RuntimeName:  cri-o
	I0816 17:39:57.901700   45790 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0816 17:39:57.901708   45790 command_runner.go:130] > RuntimeApiVersion:  v1
	I0816 17:39:57.902711   45790 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:39:57.902792   45790 ssh_runner.go:195] Run: crio --version
	I0816 17:39:57.930494   45790 command_runner.go:130] > crio version 1.29.1
	I0816 17:39:57.930522   45790 command_runner.go:130] > Version:        1.29.1
	I0816 17:39:57.930531   45790 command_runner.go:130] > GitCommit:      unknown
	I0816 17:39:57.930538   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0816 17:39:57.930544   45790 command_runner.go:130] > GitTreeState:   clean
	I0816 17:39:57.930553   45790 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0816 17:39:57.930560   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 17:39:57.930566   45790 command_runner.go:130] > Compiler:       gc
	I0816 17:39:57.930572   45790 command_runner.go:130] > Platform:       linux/amd64
	I0816 17:39:57.930578   45790 command_runner.go:130] > Linkmode:       dynamic
	I0816 17:39:57.930599   45790 command_runner.go:130] > BuildTags:      
	I0816 17:39:57.930608   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0816 17:39:57.930614   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 17:39:57.930623   45790 command_runner.go:130] >   btrfs_noversion
	I0816 17:39:57.930630   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 17:39:57.930637   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 17:39:57.930642   45790 command_runner.go:130] >   seccomp
	I0816 17:39:57.930651   45790 command_runner.go:130] > LDFlags:          unknown
	I0816 17:39:57.930658   45790 command_runner.go:130] > SeccompEnabled:   true
	I0816 17:39:57.930666   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0816 17:39:57.931771   45790 ssh_runner.go:195] Run: crio --version
	I0816 17:39:57.957761   45790 command_runner.go:130] > crio version 1.29.1
	I0816 17:39:57.957788   45790 command_runner.go:130] > Version:        1.29.1
	I0816 17:39:57.957797   45790 command_runner.go:130] > GitCommit:      unknown
	I0816 17:39:57.957804   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0816 17:39:57.957809   45790 command_runner.go:130] > GitTreeState:   clean
	I0816 17:39:57.957818   45790 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0816 17:39:57.957825   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 17:39:57.957831   45790 command_runner.go:130] > Compiler:       gc
	I0816 17:39:57.957838   45790 command_runner.go:130] > Platform:       linux/amd64
	I0816 17:39:57.957844   45790 command_runner.go:130] > Linkmode:       dynamic
	I0816 17:39:57.957851   45790 command_runner.go:130] > BuildTags:      
	I0816 17:39:57.957895   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0816 17:39:57.957906   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 17:39:57.957912   45790 command_runner.go:130] >   btrfs_noversion
	I0816 17:39:57.957916   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 17:39:57.957921   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 17:39:57.957925   45790 command_runner.go:130] >   seccomp
	I0816 17:39:57.957932   45790 command_runner.go:130] > LDFlags:          unknown
	I0816 17:39:57.957936   45790 command_runner.go:130] > SeccompEnabled:   true
	I0816 17:39:57.957942   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0816 17:39:57.960912   45790 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:39:57.962069   45790 main.go:141] libmachine: (multinode-797386) Calling .GetIP
	I0816 17:39:57.964726   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:57.965139   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:57.965170   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:57.965368   45790 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:39:57.969385   45790 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0816 17:39:57.969498   45790 kubeadm.go:883] updating cluster {Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:39:57.969676   45790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:39:57.969732   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:39:58.011427   45790 command_runner.go:130] > {
	I0816 17:39:58.011453   45790 command_runner.go:130] >   "images": [
	I0816 17:39:58.011467   45790 command_runner.go:130] >     {
	I0816 17:39:58.011480   45790 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 17:39:58.011491   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011500   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 17:39:58.011505   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011511   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011533   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 17:39:58.011548   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 17:39:58.011565   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011575   45790 command_runner.go:130] >       "size": "87165492",
	I0816 17:39:58.011581   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011591   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011601   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011611   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011618   45790 command_runner.go:130] >     },
	I0816 17:39:58.011624   45790 command_runner.go:130] >     {
	I0816 17:39:58.011632   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 17:39:58.011639   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011647   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 17:39:58.011654   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011660   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011671   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 17:39:58.011682   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 17:39:58.011691   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011699   45790 command_runner.go:130] >       "size": "87190579",
	I0816 17:39:58.011708   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011720   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011729   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011735   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011743   45790 command_runner.go:130] >     },
	I0816 17:39:58.011749   45790 command_runner.go:130] >     {
	I0816 17:39:58.011764   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 17:39:58.011773   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011780   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 17:39:58.011787   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011796   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011817   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 17:39:58.011832   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 17:39:58.011841   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011855   45790 command_runner.go:130] >       "size": "1363676",
	I0816 17:39:58.011863   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011870   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011876   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011881   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011886   45790 command_runner.go:130] >     },
	I0816 17:39:58.011890   45790 command_runner.go:130] >     {
	I0816 17:39:58.011899   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 17:39:58.011904   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011911   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 17:39:58.011916   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011922   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011933   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 17:39:58.011953   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 17:39:58.011958   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011965   45790 command_runner.go:130] >       "size": "31470524",
	I0816 17:39:58.011970   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011975   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011981   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011986   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011991   45790 command_runner.go:130] >     },
	I0816 17:39:58.011997   45790 command_runner.go:130] >     {
	I0816 17:39:58.012005   45790 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 17:39:58.012014   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012021   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 17:39:58.012027   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012037   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012050   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 17:39:58.012063   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 17:39:58.012068   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012076   45790 command_runner.go:130] >       "size": "61245718",
	I0816 17:39:58.012082   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.012088   45790 command_runner.go:130] >       "username": "nonroot",
	I0816 17:39:58.012100   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012108   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012113   45790 command_runner.go:130] >     },
	I0816 17:39:58.012121   45790 command_runner.go:130] >     {
	I0816 17:39:58.012130   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 17:39:58.012139   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012146   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 17:39:58.012154   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012160   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012169   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 17:39:58.012182   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 17:39:58.012188   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012198   45790 command_runner.go:130] >       "size": "149009664",
	I0816 17:39:58.012204   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012212   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012217   45790 command_runner.go:130] >       },
	I0816 17:39:58.012226   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012233   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012243   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012249   45790 command_runner.go:130] >     },
	I0816 17:39:58.012255   45790 command_runner.go:130] >     {
	I0816 17:39:58.012265   45790 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 17:39:58.012274   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012282   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 17:39:58.012291   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012298   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012311   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 17:39:58.012327   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 17:39:58.012336   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012341   45790 command_runner.go:130] >       "size": "95233506",
	I0816 17:39:58.012346   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012353   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012358   45790 command_runner.go:130] >       },
	I0816 17:39:58.012366   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012371   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012380   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012394   45790 command_runner.go:130] >     },
	I0816 17:39:58.012403   45790 command_runner.go:130] >     {
	I0816 17:39:58.012413   45790 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 17:39:58.012422   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012429   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 17:39:58.012437   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012443   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012473   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 17:39:58.012489   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 17:39:58.012495   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012502   45790 command_runner.go:130] >       "size": "89437512",
	I0816 17:39:58.012511   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012517   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012525   45790 command_runner.go:130] >       },
	I0816 17:39:58.012530   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012536   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012541   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012546   45790 command_runner.go:130] >     },
	I0816 17:39:58.012551   45790 command_runner.go:130] >     {
	I0816 17:39:58.012565   45790 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 17:39:58.012570   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012578   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 17:39:58.012584   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012589   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012610   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 17:39:58.012631   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 17:39:58.012638   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012643   45790 command_runner.go:130] >       "size": "92728217",
	I0816 17:39:58.012648   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.012653   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012659   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012664   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012669   45790 command_runner.go:130] >     },
	I0816 17:39:58.012673   45790 command_runner.go:130] >     {
	I0816 17:39:58.012681   45790 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 17:39:58.012686   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012704   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 17:39:58.012712   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012718   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012730   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 17:39:58.012741   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 17:39:58.012750   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012756   45790 command_runner.go:130] >       "size": "68420936",
	I0816 17:39:58.012761   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012765   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012768   45790 command_runner.go:130] >       },
	I0816 17:39:58.012772   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012776   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012783   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012786   45790 command_runner.go:130] >     },
	I0816 17:39:58.012789   45790 command_runner.go:130] >     {
	I0816 17:39:58.012795   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 17:39:58.012801   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012806   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 17:39:58.012812   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012816   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012822   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 17:39:58.012829   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 17:39:58.012834   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012838   45790 command_runner.go:130] >       "size": "742080",
	I0816 17:39:58.012842   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012846   45790 command_runner.go:130] >         "value": "65535"
	I0816 17:39:58.012850   45790 command_runner.go:130] >       },
	I0816 17:39:58.012854   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012859   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012863   45790 command_runner.go:130] >       "pinned": true
	I0816 17:39:58.012867   45790 command_runner.go:130] >     }
	I0816 17:39:58.012870   45790 command_runner.go:130] >   ]
	I0816 17:39:58.012873   45790 command_runner.go:130] > }
	I0816 17:39:58.013194   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:39:58.013211   45790 crio.go:433] Images already preloaded, skipping extraction
	I0816 17:39:58.013279   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:39:58.043584   45790 command_runner.go:130] > {
	I0816 17:39:58.043615   45790 command_runner.go:130] >   "images": [
	I0816 17:39:58.043619   45790 command_runner.go:130] >     {
	I0816 17:39:58.043626   45790 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 17:39:58.043630   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043636   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 17:39:58.043639   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043643   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043651   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 17:39:58.043658   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 17:39:58.043661   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043667   45790 command_runner.go:130] >       "size": "87165492",
	I0816 17:39:58.043673   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.043678   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.043688   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.043695   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.043700   45790 command_runner.go:130] >     },
	I0816 17:39:58.043706   45790 command_runner.go:130] >     {
	I0816 17:39:58.043714   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 17:39:58.043724   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043731   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 17:39:58.043738   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043742   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043749   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 17:39:58.043756   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 17:39:58.043760   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043766   45790 command_runner.go:130] >       "size": "87190579",
	I0816 17:39:58.043769   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.043782   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.043791   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.043801   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.043806   45790 command_runner.go:130] >     },
	I0816 17:39:58.043811   45790 command_runner.go:130] >     {
	I0816 17:39:58.043823   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 17:39:58.043829   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043837   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 17:39:58.043841   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043849   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043859   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 17:39:58.043866   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 17:39:58.043872   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043876   45790 command_runner.go:130] >       "size": "1363676",
	I0816 17:39:58.043884   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.043893   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.043902   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.043912   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.043918   45790 command_runner.go:130] >     },
	I0816 17:39:58.043925   45790 command_runner.go:130] >     {
	I0816 17:39:58.043931   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 17:39:58.043937   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043942   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 17:39:58.043948   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043951   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043961   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 17:39:58.044031   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 17:39:58.044039   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044043   45790 command_runner.go:130] >       "size": "31470524",
	I0816 17:39:58.044047   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.044050   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044055   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044061   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044069   45790 command_runner.go:130] >     },
	I0816 17:39:58.044075   45790 command_runner.go:130] >     {
	I0816 17:39:58.044088   45790 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 17:39:58.044097   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044104   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 17:39:58.044113   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044119   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044138   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 17:39:58.044151   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 17:39:58.044161   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044167   45790 command_runner.go:130] >       "size": "61245718",
	I0816 17:39:58.044174   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.044187   45790 command_runner.go:130] >       "username": "nonroot",
	I0816 17:39:58.044196   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044203   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044210   45790 command_runner.go:130] >     },
	I0816 17:39:58.044216   45790 command_runner.go:130] >     {
	I0816 17:39:58.044227   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 17:39:58.044235   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044240   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 17:39:58.044248   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044256   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044270   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 17:39:58.044284   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 17:39:58.044293   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044299   45790 command_runner.go:130] >       "size": "149009664",
	I0816 17:39:58.044306   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.044312   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.044324   45790 command_runner.go:130] >       },
	I0816 17:39:58.044333   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044338   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044342   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044346   45790 command_runner.go:130] >     },
	I0816 17:39:58.044353   45790 command_runner.go:130] >     {
	I0816 17:39:58.044363   45790 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 17:39:58.044372   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044381   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 17:39:58.044389   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044396   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044410   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 17:39:58.044423   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 17:39:58.044431   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044438   45790 command_runner.go:130] >       "size": "95233506",
	I0816 17:39:58.044445   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.044449   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.044454   45790 command_runner.go:130] >       },
	I0816 17:39:58.044461   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044469   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044483   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044492   45790 command_runner.go:130] >     },
	I0816 17:39:58.044497   45790 command_runner.go:130] >     {
	I0816 17:39:58.044509   45790 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 17:39:58.044518   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044527   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 17:39:58.044539   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044543   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044571   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 17:39:58.044587   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 17:39:58.044598   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044606   45790 command_runner.go:130] >       "size": "89437512",
	I0816 17:39:58.044613   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.044618   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.044638   45790 command_runner.go:130] >       },
	I0816 17:39:58.044644   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044653   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044659   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044667   45790 command_runner.go:130] >     },
	I0816 17:39:58.044672   45790 command_runner.go:130] >     {
	I0816 17:39:58.044684   45790 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 17:39:58.044694   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044702   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 17:39:58.044711   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044718   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044732   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 17:39:58.044749   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 17:39:58.044757   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044763   45790 command_runner.go:130] >       "size": "92728217",
	I0816 17:39:58.044769   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.044773   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044782   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044791   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044796   45790 command_runner.go:130] >     },
	I0816 17:39:58.044805   45790 command_runner.go:130] >     {
	I0816 17:39:58.044814   45790 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 17:39:58.044829   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044941   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 17:39:58.044958   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044966   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044981   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 17:39:58.044994   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 17:39:58.045000   45790 command_runner.go:130] >       ],
	I0816 17:39:58.045007   45790 command_runner.go:130] >       "size": "68420936",
	I0816 17:39:58.045016   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.045023   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.045032   45790 command_runner.go:130] >       },
	I0816 17:39:58.045039   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.045048   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.045055   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.045061   45790 command_runner.go:130] >     },
	I0816 17:39:58.045067   45790 command_runner.go:130] >     {
	I0816 17:39:58.045077   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 17:39:58.045151   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.045168   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 17:39:58.045178   45790 command_runner.go:130] >       ],
	I0816 17:39:58.045185   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.045199   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 17:39:58.045210   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 17:39:58.045217   45790 command_runner.go:130] >       ],
	I0816 17:39:58.045223   45790 command_runner.go:130] >       "size": "742080",
	I0816 17:39:58.045232   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.045240   45790 command_runner.go:130] >         "value": "65535"
	I0816 17:39:58.045249   45790 command_runner.go:130] >       },
	I0816 17:39:58.045257   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.045265   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.045272   45790 command_runner.go:130] >       "pinned": true
	I0816 17:39:58.045280   45790 command_runner.go:130] >     }
	I0816 17:39:58.045285   45790 command_runner.go:130] >   ]
	I0816 17:39:58.045293   45790 command_runner.go:130] > }
	I0816 17:39:58.045521   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:39:58.045545   45790 cache_images.go:84] Images are preloaded, skipping loading
	I0816 17:39:58.045553   45790 kubeadm.go:934] updating node { 192.168.39.218 8443 v1.31.0 crio true true} ...
	I0816 17:39:58.045666   45790 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-797386 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:39:58.046037   45790 ssh_runner.go:195] Run: crio config
	I0816 17:39:58.087059   45790 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0816 17:39:58.087098   45790 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0816 17:39:58.087109   45790 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0816 17:39:58.087115   45790 command_runner.go:130] > #
	I0816 17:39:58.087126   45790 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0816 17:39:58.087136   45790 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0816 17:39:58.087148   45790 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0816 17:39:58.087158   45790 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0816 17:39:58.087164   45790 command_runner.go:130] > # reload'.
	I0816 17:39:58.087174   45790 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0816 17:39:58.087185   45790 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0816 17:39:58.087195   45790 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0816 17:39:58.087209   45790 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0816 17:39:58.087214   45790 command_runner.go:130] > [crio]
	I0816 17:39:58.087223   45790 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0816 17:39:58.087233   45790 command_runner.go:130] > # containers images, in this directory.
	I0816 17:39:58.087243   45790 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0816 17:39:58.087259   45790 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0816 17:39:58.087270   45790 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0816 17:39:58.087282   45790 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0816 17:39:58.087453   45790 command_runner.go:130] > # imagestore = ""
	I0816 17:39:58.087480   45790 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0816 17:39:58.087493   45790 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0816 17:39:58.087604   45790 command_runner.go:130] > storage_driver = "overlay"
	I0816 17:39:58.087622   45790 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0816 17:39:58.087631   45790 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0816 17:39:58.087640   45790 command_runner.go:130] > storage_option = [
	I0816 17:39:58.087716   45790 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0816 17:39:58.087747   45790 command_runner.go:130] > ]
	I0816 17:39:58.087758   45790 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0816 17:39:58.087779   45790 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0816 17:39:58.087948   45790 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0816 17:39:58.087960   45790 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0816 17:39:58.087969   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0816 17:39:58.087977   45790 command_runner.go:130] > # always happen on a node reboot
	I0816 17:39:58.088211   45790 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0816 17:39:58.088236   45790 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0816 17:39:58.088257   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0816 17:39:58.088266   45790 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0816 17:39:58.088328   45790 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0816 17:39:58.088347   45790 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0816 17:39:58.088362   45790 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0816 17:39:58.088577   45790 command_runner.go:130] > # internal_wipe = true
	I0816 17:39:58.088607   45790 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0816 17:39:58.088629   45790 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0816 17:39:58.088881   45790 command_runner.go:130] > # internal_repair = false
	I0816 17:39:58.088893   45790 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0816 17:39:58.088903   45790 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0816 17:39:58.088912   45790 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0816 17:39:58.089143   45790 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0816 17:39:58.089155   45790 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0816 17:39:58.089161   45790 command_runner.go:130] > [crio.api]
	I0816 17:39:58.089170   45790 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0816 17:39:58.089362   45790 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0816 17:39:58.089379   45790 command_runner.go:130] > # IP address on which the stream server will listen.
	I0816 17:39:58.089584   45790 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0816 17:39:58.089598   45790 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0816 17:39:58.089605   45790 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0816 17:39:58.089898   45790 command_runner.go:130] > # stream_port = "0"
	I0816 17:39:58.089909   45790 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0816 17:39:58.090130   45790 command_runner.go:130] > # stream_enable_tls = false
	I0816 17:39:58.090141   45790 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0816 17:39:58.090312   45790 command_runner.go:130] > # stream_idle_timeout = ""
	I0816 17:39:58.090326   45790 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0816 17:39:58.090337   45790 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0816 17:39:58.090345   45790 command_runner.go:130] > # minutes.
	I0816 17:39:58.090522   45790 command_runner.go:130] > # stream_tls_cert = ""
	I0816 17:39:58.090544   45790 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0816 17:39:58.090555   45790 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0816 17:39:58.090760   45790 command_runner.go:130] > # stream_tls_key = ""
	I0816 17:39:58.090779   45790 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0816 17:39:58.090790   45790 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0816 17:39:58.090835   45790 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0816 17:39:58.090931   45790 command_runner.go:130] > # stream_tls_ca = ""
	I0816 17:39:58.090945   45790 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 17:39:58.091055   45790 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0816 17:39:58.091071   45790 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 17:39:58.091167   45790 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0816 17:39:58.091182   45790 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0816 17:39:58.091194   45790 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0816 17:39:58.091205   45790 command_runner.go:130] > [crio.runtime]
	I0816 17:39:58.091217   45790 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0816 17:39:58.091229   45790 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0816 17:39:58.091238   45790 command_runner.go:130] > # "nofile=1024:2048"
	I0816 17:39:58.091252   45790 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0816 17:39:58.091274   45790 command_runner.go:130] > # default_ulimits = [
	I0816 17:39:58.091394   45790 command_runner.go:130] > # ]
	I0816 17:39:58.091409   45790 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0816 17:39:58.091611   45790 command_runner.go:130] > # no_pivot = false
	I0816 17:39:58.091626   45790 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0816 17:39:58.091637   45790 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0816 17:39:58.091913   45790 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0816 17:39:58.091927   45790 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0816 17:39:58.091935   45790 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0816 17:39:58.091947   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 17:39:58.092024   45790 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0816 17:39:58.092036   45790 command_runner.go:130] > # Cgroup setting for conmon
	I0816 17:39:58.092047   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0816 17:39:58.092119   45790 command_runner.go:130] > conmon_cgroup = "pod"
	I0816 17:39:58.092129   45790 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0816 17:39:58.092134   45790 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0816 17:39:58.092141   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 17:39:58.092144   45790 command_runner.go:130] > conmon_env = [
	I0816 17:39:58.092229   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 17:39:58.092299   45790 command_runner.go:130] > ]
	I0816 17:39:58.092317   45790 command_runner.go:130] > # Additional environment variables to set for all the
	I0816 17:39:58.092328   45790 command_runner.go:130] > # containers. These are overridden if set in the
	I0816 17:39:58.092338   45790 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0816 17:39:58.092348   45790 command_runner.go:130] > # default_env = [
	I0816 17:39:58.092448   45790 command_runner.go:130] > # ]
	I0816 17:39:58.092463   45790 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0816 17:39:58.092475   45790 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0816 17:39:58.092682   45790 command_runner.go:130] > # selinux = false
	I0816 17:39:58.092697   45790 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0816 17:39:58.092708   45790 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0816 17:39:58.092720   45790 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0816 17:39:58.092894   45790 command_runner.go:130] > # seccomp_profile = ""
	I0816 17:39:58.092909   45790 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0816 17:39:58.092920   45790 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0816 17:39:58.092930   45790 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0816 17:39:58.092940   45790 command_runner.go:130] > # which might increase security.
	I0816 17:39:58.092948   45790 command_runner.go:130] > # This option is currently deprecated,
	I0816 17:39:58.092960   45790 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0816 17:39:58.093031   45790 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0816 17:39:58.093048   45790 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0816 17:39:58.093059   45790 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0816 17:39:58.093073   45790 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0816 17:39:58.093086   45790 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0816 17:39:58.093098   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.093256   45790 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0816 17:39:58.093273   45790 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0816 17:39:58.093278   45790 command_runner.go:130] > # the cgroup blockio controller.
	I0816 17:39:58.093420   45790 command_runner.go:130] > # blockio_config_file = ""
	I0816 17:39:58.093436   45790 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0816 17:39:58.093443   45790 command_runner.go:130] > # blockio parameters.
	I0816 17:39:58.093672   45790 command_runner.go:130] > # blockio_reload = false
	I0816 17:39:58.093689   45790 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0816 17:39:58.093695   45790 command_runner.go:130] > # irqbalance daemon.
	I0816 17:39:58.093979   45790 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0816 17:39:58.093997   45790 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0816 17:39:58.094007   45790 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0816 17:39:58.094019   45790 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0816 17:39:58.094203   45790 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0816 17:39:58.094218   45790 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0816 17:39:58.094227   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.094378   45790 command_runner.go:130] > # rdt_config_file = ""
	I0816 17:39:58.094390   45790 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0816 17:39:58.094497   45790 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0816 17:39:58.094542   45790 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0816 17:39:58.094735   45790 command_runner.go:130] > # separate_pull_cgroup = ""
	I0816 17:39:58.094748   45790 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0816 17:39:58.094759   45790 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0816 17:39:58.094768   45790 command_runner.go:130] > # will be added.
	I0816 17:39:58.094832   45790 command_runner.go:130] > # default_capabilities = [
	I0816 17:39:58.094869   45790 command_runner.go:130] > # 	"CHOWN",
	I0816 17:39:58.094887   45790 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0816 17:39:58.094905   45790 command_runner.go:130] > # 	"FSETID",
	I0816 17:39:58.094957   45790 command_runner.go:130] > # 	"FOWNER",
	I0816 17:39:58.094967   45790 command_runner.go:130] > # 	"SETGID",
	I0816 17:39:58.094990   45790 command_runner.go:130] > # 	"SETUID",
	I0816 17:39:58.095000   45790 command_runner.go:130] > # 	"SETPCAP",
	I0816 17:39:58.095007   45790 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0816 17:39:58.095034   45790 command_runner.go:130] > # 	"KILL",
	I0816 17:39:58.095043   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095057   45790 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0816 17:39:58.095070   45790 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0816 17:39:58.095080   45790 command_runner.go:130] > # add_inheritable_capabilities = false
	I0816 17:39:58.095093   45790 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0816 17:39:58.095106   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 17:39:58.095116   45790 command_runner.go:130] > default_sysctls = [
	I0816 17:39:58.095126   45790 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0816 17:39:58.095134   45790 command_runner.go:130] > ]
	I0816 17:39:58.095144   45790 command_runner.go:130] > # List of devices on the host that a
	I0816 17:39:58.095166   45790 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0816 17:39:58.095176   45790 command_runner.go:130] > # allowed_devices = [
	I0816 17:39:58.095184   45790 command_runner.go:130] > # 	"/dev/fuse",
	I0816 17:39:58.095192   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095201   45790 command_runner.go:130] > # List of additional devices, specified as
	I0816 17:39:58.095218   45790 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0816 17:39:58.095229   45790 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0816 17:39:58.095240   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 17:39:58.095249   45790 command_runner.go:130] > # additional_devices = [
	I0816 17:39:58.095253   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095263   45790 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0816 17:39:58.095276   45790 command_runner.go:130] > # cdi_spec_dirs = [
	I0816 17:39:58.095283   45790 command_runner.go:130] > # 	"/etc/cdi",
	I0816 17:39:58.095291   45790 command_runner.go:130] > # 	"/var/run/cdi",
	I0816 17:39:58.095318   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095327   45790 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0816 17:39:58.095338   45790 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0816 17:39:58.095348   45790 command_runner.go:130] > # Defaults to false.
	I0816 17:39:58.095361   45790 command_runner.go:130] > # device_ownership_from_security_context = false
	I0816 17:39:58.095375   45790 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0816 17:39:58.095387   45790 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0816 17:39:58.095395   45790 command_runner.go:130] > # hooks_dir = [
	I0816 17:39:58.095405   45790 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0816 17:39:58.095414   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095425   45790 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0816 17:39:58.095438   45790 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0816 17:39:58.095450   45790 command_runner.go:130] > # its default mounts from the following two files:
	I0816 17:39:58.095457   45790 command_runner.go:130] > #
	I0816 17:39:58.095468   45790 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0816 17:39:58.095482   45790 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0816 17:39:58.095491   45790 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0816 17:39:58.095497   45790 command_runner.go:130] > #
	I0816 17:39:58.095509   45790 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0816 17:39:58.095519   45790 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0816 17:39:58.095531   45790 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0816 17:39:58.095540   45790 command_runner.go:130] > #      only add mounts it finds in this file.
	I0816 17:39:58.095550   45790 command_runner.go:130] > #
	I0816 17:39:58.095559   45790 command_runner.go:130] > # default_mounts_file = ""
	I0816 17:39:58.095569   45790 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0816 17:39:58.095583   45790 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0816 17:39:58.095592   45790 command_runner.go:130] > pids_limit = 1024
	I0816 17:39:58.095602   45790 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0816 17:39:58.095614   45790 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0816 17:39:58.095625   45790 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0816 17:39:58.095640   45790 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0816 17:39:58.095649   45790 command_runner.go:130] > # log_size_max = -1
	I0816 17:39:58.095660   45790 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0816 17:39:58.095673   45790 command_runner.go:130] > # log_to_journald = false
	I0816 17:39:58.095686   45790 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0816 17:39:58.095697   45790 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0816 17:39:58.095709   45790 command_runner.go:130] > # Path to directory for container attach sockets.
	I0816 17:39:58.095719   45790 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0816 17:39:58.095727   45790 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0816 17:39:58.095736   45790 command_runner.go:130] > # bind_mount_prefix = ""
	I0816 17:39:58.095745   45790 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0816 17:39:58.095754   45790 command_runner.go:130] > # read_only = false
	I0816 17:39:58.095764   45790 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0816 17:39:58.095776   45790 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0816 17:39:58.095786   45790 command_runner.go:130] > # live configuration reload.
	I0816 17:39:58.095793   45790 command_runner.go:130] > # log_level = "info"
	I0816 17:39:58.095804   45790 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0816 17:39:58.095813   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.095823   45790 command_runner.go:130] > # log_filter = ""
	I0816 17:39:58.095832   45790 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0816 17:39:58.095858   45790 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0816 17:39:58.095869   45790 command_runner.go:130] > # separated by comma.
	I0816 17:39:58.095883   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.095894   45790 command_runner.go:130] > # uid_mappings = ""
	I0816 17:39:58.095904   45790 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0816 17:39:58.095915   45790 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0816 17:39:58.095924   45790 command_runner.go:130] > # separated by comma.
	I0816 17:39:58.095936   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.095944   45790 command_runner.go:130] > # gid_mappings = ""
	I0816 17:39:58.095953   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0816 17:39:58.095966   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 17:39:58.095979   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 17:39:58.095994   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.096005   45790 command_runner.go:130] > # minimum_mappable_uid = -1
	I0816 17:39:58.096017   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0816 17:39:58.096030   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 17:39:58.096042   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 17:39:58.096057   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.096067   45790 command_runner.go:130] > # minimum_mappable_gid = -1
	I0816 17:39:58.096077   45790 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0816 17:39:58.096086   45790 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0816 17:39:58.096094   45790 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0816 17:39:58.096107   45790 command_runner.go:130] > # ctr_stop_timeout = 30
	I0816 17:39:58.096120   45790 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0816 17:39:58.096133   45790 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0816 17:39:58.096143   45790 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0816 17:39:58.096154   45790 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0816 17:39:58.096160   45790 command_runner.go:130] > drop_infra_ctr = false
	I0816 17:39:58.096173   45790 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0816 17:39:58.096185   45790 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0816 17:39:58.096195   45790 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0816 17:39:58.096206   45790 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0816 17:39:58.096217   45790 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0816 17:39:58.096229   45790 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0816 17:39:58.096240   45790 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0816 17:39:58.096247   45790 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0816 17:39:58.096254   45790 command_runner.go:130] > # shared_cpuset = ""
	I0816 17:39:58.096263   45790 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0816 17:39:58.096274   45790 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0816 17:39:58.096281   45790 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0816 17:39:58.096300   45790 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0816 17:39:58.096312   45790 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0816 17:39:58.096322   45790 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0816 17:39:58.096335   45790 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0816 17:39:58.096343   45790 command_runner.go:130] > # enable_criu_support = false
	I0816 17:39:58.096351   45790 command_runner.go:130] > # Enable/disable the generation of the container,
	I0816 17:39:58.096364   45790 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0816 17:39:58.096373   45790 command_runner.go:130] > # enable_pod_events = false
	I0816 17:39:58.096383   45790 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 17:39:58.096408   45790 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0816 17:39:58.096417   45790 command_runner.go:130] > # default_runtime = "runc"
	I0816 17:39:58.096426   45790 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0816 17:39:58.096440   45790 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0816 17:39:58.096454   45790 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0816 17:39:58.096465   45790 command_runner.go:130] > # creation as a file is not desired either.
	I0816 17:39:58.096480   45790 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0816 17:39:58.096494   45790 command_runner.go:130] > # the hostname is being managed dynamically.
	I0816 17:39:58.096504   45790 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0816 17:39:58.096511   45790 command_runner.go:130] > # ]
	I0816 17:39:58.096523   45790 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0816 17:39:58.096536   45790 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0816 17:39:58.096549   45790 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0816 17:39:58.096560   45790 command_runner.go:130] > # Each entry in the table should follow the format:
	I0816 17:39:58.096567   45790 command_runner.go:130] > #
	I0816 17:39:58.096591   45790 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0816 17:39:58.096604   45790 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0816 17:39:58.096649   45790 command_runner.go:130] > # runtime_type = "oci"
	I0816 17:39:58.096662   45790 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0816 17:39:58.096670   45790 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0816 17:39:58.096680   45790 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0816 17:39:58.096687   45790 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0816 17:39:58.096696   45790 command_runner.go:130] > # monitor_env = []
	I0816 17:39:58.096703   45790 command_runner.go:130] > # privileged_without_host_devices = false
	I0816 17:39:58.096710   45790 command_runner.go:130] > # allowed_annotations = []
	I0816 17:39:58.096715   45790 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0816 17:39:58.096722   45790 command_runner.go:130] > # Where:
	I0816 17:39:58.096730   45790 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0816 17:39:58.096742   45790 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0816 17:39:58.096755   45790 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0816 17:39:58.096768   45790 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0816 17:39:58.096777   45790 command_runner.go:130] > #   in $PATH.
	I0816 17:39:58.096786   45790 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0816 17:39:58.096797   45790 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0816 17:39:58.096810   45790 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0816 17:39:58.096820   45790 command_runner.go:130] > #   state.
	I0816 17:39:58.096830   45790 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0816 17:39:58.096841   45790 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0816 17:39:58.096854   45790 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0816 17:39:58.096865   45790 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0816 17:39:58.096876   45790 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0816 17:39:58.096889   45790 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0816 17:39:58.096901   45790 command_runner.go:130] > #   The currently recognized values are:
	I0816 17:39:58.096911   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0816 17:39:58.096924   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0816 17:39:58.096937   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0816 17:39:58.096949   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0816 17:39:58.096964   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0816 17:39:58.096977   45790 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0816 17:39:58.096990   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0816 17:39:58.097000   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0816 17:39:58.097012   45790 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0816 17:39:58.097025   45790 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0816 17:39:58.097034   45790 command_runner.go:130] > #   deprecated option "conmon".
	I0816 17:39:58.097045   45790 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0816 17:39:58.097056   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0816 17:39:58.097068   45790 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0816 17:39:58.097079   45790 command_runner.go:130] > #   should be moved to the container's cgroup
	I0816 17:39:58.097094   45790 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0816 17:39:58.097114   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0816 17:39:58.097130   45790 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0816 17:39:58.097141   45790 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0816 17:39:58.097147   45790 command_runner.go:130] > #
	I0816 17:39:58.097154   45790 command_runner.go:130] > # Using the seccomp notifier feature:
	I0816 17:39:58.097163   45790 command_runner.go:130] > #
	I0816 17:39:58.097172   45790 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0816 17:39:58.097185   45790 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0816 17:39:58.097193   45790 command_runner.go:130] > #
	I0816 17:39:58.097203   45790 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0816 17:39:58.097214   45790 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0816 17:39:58.097218   45790 command_runner.go:130] > #
	I0816 17:39:58.097227   45790 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0816 17:39:58.097235   45790 command_runner.go:130] > # feature.
	I0816 17:39:58.097241   45790 command_runner.go:130] > #
	I0816 17:39:58.097253   45790 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0816 17:39:58.097266   45790 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0816 17:39:58.097278   45790 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0816 17:39:58.097290   45790 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0816 17:39:58.097304   45790 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0816 17:39:58.097310   45790 command_runner.go:130] > #
	I0816 17:39:58.097319   45790 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0816 17:39:58.097336   45790 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0816 17:39:58.097344   45790 command_runner.go:130] > #
	I0816 17:39:58.097355   45790 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0816 17:39:58.097366   45790 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0816 17:39:58.097374   45790 command_runner.go:130] > #
	I0816 17:39:58.097383   45790 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0816 17:39:58.097392   45790 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0816 17:39:58.097396   45790 command_runner.go:130] > # limitation.
	I0816 17:39:58.097401   45790 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0816 17:39:58.097405   45790 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0816 17:39:58.097411   45790 command_runner.go:130] > runtime_type = "oci"
	I0816 17:39:58.097418   45790 command_runner.go:130] > runtime_root = "/run/runc"
	I0816 17:39:58.097424   45790 command_runner.go:130] > runtime_config_path = ""
	I0816 17:39:58.097431   45790 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0816 17:39:58.097437   45790 command_runner.go:130] > monitor_cgroup = "pod"
	I0816 17:39:58.097444   45790 command_runner.go:130] > monitor_exec_cgroup = ""
	I0816 17:39:58.097450   45790 command_runner.go:130] > monitor_env = [
	I0816 17:39:58.097459   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 17:39:58.097464   45790 command_runner.go:130] > ]
	I0816 17:39:58.097471   45790 command_runner.go:130] > privileged_without_host_devices = false
	I0816 17:39:58.097480   45790 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0816 17:39:58.097488   45790 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0816 17:39:58.097501   45790 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0816 17:39:58.097516   45790 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0816 17:39:58.097531   45790 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0816 17:39:58.097542   45790 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0816 17:39:58.097557   45790 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0816 17:39:58.097571   45790 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0816 17:39:58.097583   45790 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0816 17:39:58.097594   45790 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0816 17:39:58.097603   45790 command_runner.go:130] > # Example:
	I0816 17:39:58.097611   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0816 17:39:58.097619   45790 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0816 17:39:58.097627   45790 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0816 17:39:58.097634   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0816 17:39:58.097639   45790 command_runner.go:130] > # cpuset = 0
	I0816 17:39:58.097645   45790 command_runner.go:130] > # cpushares = "0-1"
	I0816 17:39:58.097650   45790 command_runner.go:130] > # Where:
	I0816 17:39:58.097660   45790 command_runner.go:130] > # The workload name is workload-type.
	I0816 17:39:58.097671   45790 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0816 17:39:58.097680   45790 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0816 17:39:58.097689   45790 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0816 17:39:58.097700   45790 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0816 17:39:58.097709   45790 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0816 17:39:58.097717   45790 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0816 17:39:58.097727   45790 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0816 17:39:58.097733   45790 command_runner.go:130] > # Default value is set to true
	I0816 17:39:58.097740   45790 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0816 17:39:58.097748   45790 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0816 17:39:58.097756   45790 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0816 17:39:58.097764   45790 command_runner.go:130] > # Default value is set to 'false'
	I0816 17:39:58.097770   45790 command_runner.go:130] > # disable_hostport_mapping = false
	I0816 17:39:58.097780   45790 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0816 17:39:58.097785   45790 command_runner.go:130] > #
	I0816 17:39:58.097794   45790 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0816 17:39:58.097814   45790 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0816 17:39:58.097825   45790 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0816 17:39:58.097831   45790 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0816 17:39:58.097836   45790 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0816 17:39:58.097839   45790 command_runner.go:130] > [crio.image]
	I0816 17:39:58.097844   45790 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0816 17:39:58.097849   45790 command_runner.go:130] > # default_transport = "docker://"
	I0816 17:39:58.097856   45790 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0816 17:39:58.097862   45790 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0816 17:39:58.097866   45790 command_runner.go:130] > # global_auth_file = ""
	I0816 17:39:58.097871   45790 command_runner.go:130] > # The image used to instantiate infra containers.
	I0816 17:39:58.097875   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.097883   45790 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0816 17:39:58.097889   45790 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0816 17:39:58.097895   45790 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0816 17:39:58.097900   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.097907   45790 command_runner.go:130] > # pause_image_auth_file = ""
	I0816 17:39:58.097913   45790 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0816 17:39:58.097921   45790 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0816 17:39:58.097930   45790 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0816 17:39:58.097938   45790 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0816 17:39:58.097942   45790 command_runner.go:130] > # pause_command = "/pause"
	I0816 17:39:58.097949   45790 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0816 17:39:58.097955   45790 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0816 17:39:58.097963   45790 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0816 17:39:58.097968   45790 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0816 17:39:58.097976   45790 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0816 17:39:58.097981   45790 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0816 17:39:58.097985   45790 command_runner.go:130] > # pinned_images = [
	I0816 17:39:58.097989   45790 command_runner.go:130] > # ]
	I0816 17:39:58.097994   45790 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0816 17:39:58.098002   45790 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0816 17:39:58.098008   45790 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0816 17:39:58.098016   45790 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0816 17:39:58.098021   45790 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0816 17:39:58.098027   45790 command_runner.go:130] > # signature_policy = ""
	I0816 17:39:58.098032   45790 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0816 17:39:58.098040   45790 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0816 17:39:58.098047   45790 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0816 17:39:58.098055   45790 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0816 17:39:58.098061   45790 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0816 17:39:58.098066   45790 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0816 17:39:58.098071   45790 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0816 17:39:58.098081   45790 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0816 17:39:58.098084   45790 command_runner.go:130] > # changing them here.
	I0816 17:39:58.098088   45790 command_runner.go:130] > # insecure_registries = [
	I0816 17:39:58.098092   45790 command_runner.go:130] > # ]
	I0816 17:39:58.098098   45790 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0816 17:39:58.098105   45790 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0816 17:39:58.098110   45790 command_runner.go:130] > # image_volumes = "mkdir"
	I0816 17:39:58.098117   45790 command_runner.go:130] > # Temporary directory to use for storing big files
	I0816 17:39:58.098121   45790 command_runner.go:130] > # big_files_temporary_dir = ""
	I0816 17:39:58.098127   45790 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0816 17:39:58.098131   45790 command_runner.go:130] > # CNI plugins.
	I0816 17:39:58.098135   45790 command_runner.go:130] > [crio.network]
	I0816 17:39:58.098140   45790 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0816 17:39:58.098148   45790 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0816 17:39:58.098153   45790 command_runner.go:130] > # cni_default_network = ""
	I0816 17:39:58.098158   45790 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0816 17:39:58.098164   45790 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0816 17:39:58.098170   45790 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0816 17:39:58.098176   45790 command_runner.go:130] > # plugin_dirs = [
	I0816 17:39:58.098180   45790 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0816 17:39:58.098183   45790 command_runner.go:130] > # ]
	I0816 17:39:58.098188   45790 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0816 17:39:58.098194   45790 command_runner.go:130] > [crio.metrics]
	I0816 17:39:58.098198   45790 command_runner.go:130] > # Globally enable or disable metrics support.
	I0816 17:39:58.098203   45790 command_runner.go:130] > enable_metrics = true
	I0816 17:39:58.098209   45790 command_runner.go:130] > # Specify enabled metrics collectors.
	I0816 17:39:58.098214   45790 command_runner.go:130] > # Per default all metrics are enabled.
	I0816 17:39:58.098220   45790 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0816 17:39:58.098226   45790 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0816 17:39:58.098234   45790 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0816 17:39:58.098239   45790 command_runner.go:130] > # metrics_collectors = [
	I0816 17:39:58.098243   45790 command_runner.go:130] > # 	"operations",
	I0816 17:39:58.098248   45790 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0816 17:39:58.098253   45790 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0816 17:39:58.098256   45790 command_runner.go:130] > # 	"operations_errors",
	I0816 17:39:58.098264   45790 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0816 17:39:58.098270   45790 command_runner.go:130] > # 	"image_pulls_by_name",
	I0816 17:39:58.098278   45790 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0816 17:39:58.098282   45790 command_runner.go:130] > # 	"image_pulls_failures",
	I0816 17:39:58.098286   45790 command_runner.go:130] > # 	"image_pulls_successes",
	I0816 17:39:58.098296   45790 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0816 17:39:58.098303   45790 command_runner.go:130] > # 	"image_layer_reuse",
	I0816 17:39:58.098307   45790 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0816 17:39:58.098311   45790 command_runner.go:130] > # 	"containers_oom_total",
	I0816 17:39:58.098315   45790 command_runner.go:130] > # 	"containers_oom",
	I0816 17:39:58.098318   45790 command_runner.go:130] > # 	"processes_defunct",
	I0816 17:39:58.098322   45790 command_runner.go:130] > # 	"operations_total",
	I0816 17:39:58.098326   45790 command_runner.go:130] > # 	"operations_latency_seconds",
	I0816 17:39:58.098331   45790 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0816 17:39:58.098335   45790 command_runner.go:130] > # 	"operations_errors_total",
	I0816 17:39:58.098342   45790 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0816 17:39:58.098346   45790 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0816 17:39:58.098351   45790 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0816 17:39:58.098355   45790 command_runner.go:130] > # 	"image_pulls_success_total",
	I0816 17:39:58.098361   45790 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0816 17:39:58.098365   45790 command_runner.go:130] > # 	"containers_oom_count_total",
	I0816 17:39:58.098370   45790 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0816 17:39:58.098375   45790 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0816 17:39:58.098382   45790 command_runner.go:130] > # ]
	I0816 17:39:58.098387   45790 command_runner.go:130] > # The port on which the metrics server will listen.
	I0816 17:39:58.098391   45790 command_runner.go:130] > # metrics_port = 9090
	I0816 17:39:58.098395   45790 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0816 17:39:58.098399   45790 command_runner.go:130] > # metrics_socket = ""
	I0816 17:39:58.098404   45790 command_runner.go:130] > # The certificate for the secure metrics server.
	I0816 17:39:58.098413   45790 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0816 17:39:58.098422   45790 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0816 17:39:58.098432   45790 command_runner.go:130] > # certificate on any modification event.
	I0816 17:39:58.098438   45790 command_runner.go:130] > # metrics_cert = ""
	I0816 17:39:58.098449   45790 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0816 17:39:58.098457   45790 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0816 17:39:58.098461   45790 command_runner.go:130] > # metrics_key = ""
	I0816 17:39:58.098469   45790 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0816 17:39:58.098474   45790 command_runner.go:130] > [crio.tracing]
	I0816 17:39:58.098482   45790 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0816 17:39:58.098486   45790 command_runner.go:130] > # enable_tracing = false
	I0816 17:39:58.098495   45790 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0816 17:39:58.098505   45790 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0816 17:39:58.098518   45790 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0816 17:39:58.098526   45790 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0816 17:39:58.098530   45790 command_runner.go:130] > # CRI-O NRI configuration.
	I0816 17:39:58.098538   45790 command_runner.go:130] > [crio.nri]
	I0816 17:39:58.098545   45790 command_runner.go:130] > # Globally enable or disable NRI.
	I0816 17:39:58.098554   45790 command_runner.go:130] > # enable_nri = false
	I0816 17:39:58.098560   45790 command_runner.go:130] > # NRI socket to listen on.
	I0816 17:39:58.098570   45790 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0816 17:39:58.098579   45790 command_runner.go:130] > # NRI plugin directory to use.
	I0816 17:39:58.098590   45790 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0816 17:39:58.098598   45790 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0816 17:39:58.098609   45790 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0816 17:39:58.098620   45790 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0816 17:39:58.098629   45790 command_runner.go:130] > # nri_disable_connections = false
	I0816 17:39:58.098634   45790 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0816 17:39:58.098641   45790 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0816 17:39:58.098645   45790 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0816 17:39:58.098652   45790 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0816 17:39:58.098658   45790 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0816 17:39:58.098663   45790 command_runner.go:130] > [crio.stats]
	I0816 17:39:58.098670   45790 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0816 17:39:58.098680   45790 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0816 17:39:58.098687   45790 command_runner.go:130] > # stats_collection_period = 0
	I0816 17:39:58.098715   45790 command_runner.go:130] ! time="2024-08-16 17:39:58.051659931Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0816 17:39:58.098738   45790 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
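	The [crio.runtime.runtimes] comments in the dump above describe how additional OCI runtime handlers are declared and how allowed_annotations gates features such as the seccomp notifier. As a minimal sketch only (the handler name, binary paths, and drop-in file name below are illustrative assumptions and are not part of this test run's generated configuration), such a handler could be added via a CRI-O drop-in file:
	
	# Hypothetical drop-in, e.g. /etc/crio/crio.conf.d/10-crun-notify.conf
	# (assumes a crun binary at /usr/bin/crun; all values are illustrative).
	[crio.runtime.runtimes.crun-notify]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	# Allow pods using this handler to opt into the seccomp notifier via the
	# "io.kubernetes.cri-o.seccompNotifierAction" annotation described above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	
	A pod would then typically select this handler through a RuntimeClass whose handler field matches the name above, after which the sandbox annotation "io.kubernetes.cri-o.seccompNotifierAction=stop" would behave as described in the comments.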
	I0816 17:39:58.098868   45790 cni.go:84] Creating CNI manager for ""
	I0816 17:39:58.098878   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 17:39:58.098889   45790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:39:58.098915   45790 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-797386 NodeName:multinode-797386 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:39:58.099090   45790 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-797386"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
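	The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written a few steps later to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of how such a stream could be split and inspected (illustrative only, not minikube code; it assumes gopkg.in/yaml.v3 is available and uses the file path shown in the log):

	// multidoc.go - split a multi-document kubeadm YAML stream and report each
	// document's kind plus the ClusterConfiguration's kubernetesVersion.
	// Hypothetical helper for illustration only.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if err == io.EOF {
					break // no more documents in the stream
				}
				log.Fatal(err)
			}
			fmt.Println("kind:", doc["kind"])
			if doc["kind"] == "ClusterConfiguration" {
				fmt.Println("  kubernetesVersion:", doc["kubernetesVersion"])
			}
		}
	}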
	
	I0816 17:39:58.099158   45790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:39:58.108732   45790 command_runner.go:130] > kubeadm
	I0816 17:39:58.108749   45790 command_runner.go:130] > kubectl
	I0816 17:39:58.108755   45790 command_runner.go:130] > kubelet
	I0816 17:39:58.108872   45790 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:39:58.108930   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 17:39:58.117711   45790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 17:39:58.132111   45790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:39:58.146287   45790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0816 17:39:58.161543   45790 ssh_runner.go:195] Run: grep 192.168.39.218	control-plane.minikube.internal$ /etc/hosts
	I0816 17:39:58.164797   45790 command_runner.go:130] > 192.168.39.218	control-plane.minikube.internal
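	The grep above confirms that /etc/hosts already maps control-plane.minikube.internal to the node IP; only if the entry were missing would it need to be appended. A minimal Go sketch of that check-then-append step (illustrative only; the IP and hostname are taken from the log, and appending to /etc/hosts requires root):

	// hostsentry.go - ensure /etc/hosts maps control-plane.minikube.internal
	// to the node IP, mirroring the grep check in the log. Illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		const ip, host = "192.168.39.218", "control-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
				fmt.Println("entry already present:", line)
				return
			}
		}

		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		fmt.Fprintf(f, "%s\t%s\n", ip, host)
		fmt.Println("entry appended")
	}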
	I0816 17:39:58.164853   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:39:58.300749   45790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:39:58.315462   45790 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386 for IP: 192.168.39.218
	I0816 17:39:58.315488   45790 certs.go:194] generating shared ca certs ...
	I0816 17:39:58.315506   45790 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:39:58.315680   45790 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:39:58.315718   45790 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:39:58.315729   45790 certs.go:256] generating profile certs ...
	I0816 17:39:58.315801   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/client.key
	I0816 17:39:58.315856   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.key.e5b1fba5
	I0816 17:39:58.315889   45790 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.key
	I0816 17:39:58.315899   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:39:58.315912   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:39:58.315923   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:39:58.315933   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:39:58.315945   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:39:58.315959   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:39:58.315972   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:39:58.315986   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:39:58.316049   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:39:58.316076   45790 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:39:58.316085   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:39:58.316107   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:39:58.316128   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:39:58.316148   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:39:58.316185   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:39:58.316212   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.316226   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.316238   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.316875   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:39:58.338921   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:39:58.360319   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:39:58.382234   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:39:58.404599   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 17:39:58.426747   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:39:58.447847   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:39:58.469629   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 17:39:58.491056   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:39:58.512795   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:39:58.534479   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:39:58.555704   45790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:39:58.570418   45790 ssh_runner.go:195] Run: openssl version
	I0816 17:39:58.575519   45790 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0816 17:39:58.575709   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:39:58.585300   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.589857   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.589878   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.589915   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.594966   45790 command_runner.go:130] > b5213941
	I0816 17:39:58.595010   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:39:58.603431   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:39:58.612612   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.616399   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.616425   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.616463   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.621314   45790 command_runner.go:130] > 51391683
	I0816 17:39:58.621404   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:39:58.629544   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:39:58.638770   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.642547   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.642574   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.642610   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.647550   45790 command_runner.go:130] > 3ec20f2e
	I0816 17:39:58.647594   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
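	The ln -fs commands above install each CA certificate under /etc/ssl/certs/<openssl-hash>.0, the lookup scheme OpenSSL uses for trust anchors. A hedged Go sketch of the same idea (illustrative only; it shells out to the openssl binary for the subject hash, and the certificate path is one of the files shown in the log):

	// linkca.go - link a CA certificate into /etc/ssl/certs under its OpenSSL
	// subject-hash name (<hash>.0), similar to the ln -fs commands in the log.
	// Illustrative sketch only; requires the openssl binary and root privileges.
	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// openssl prints the 8-hex-digit subject hash on stdout.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -fs by replacing any existing link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		log.Printf("linked %s -> %s", link, cert)
	}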
	I0816 17:39:58.655704   45790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:39:58.659342   45790 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:39:58.659361   45790 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0816 17:39:58.659369   45790 command_runner.go:130] > Device: 253,1	Inode: 1056278     Links: 1
	I0816 17:39:58.659381   45790 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 17:39:58.659393   45790 command_runner.go:130] > Access: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659405   45790 command_runner.go:130] > Modify: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659413   45790 command_runner.go:130] > Change: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659424   45790 command_runner.go:130] >  Birth: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659470   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 17:39:58.664301   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.664489   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 17:39:58.669261   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.669422   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 17:39:58.674109   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.674259   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 17:39:58.679047   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.679091   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 17:39:58.683677   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.683763   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 17:39:58.688832   45790 command_runner.go:130] > Certificate will not expire
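	Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86,400 seconds). The same check can be done natively; a minimal Go sketch, illustrative only and not the minikube implementation, using one of the certificate paths from the log:

	// checkend.go - report whether a PEM certificate expires within 24 hours,
	// equivalent in spirit to `openssl x509 -noout -checkend 86400`.
	// Illustrative sketch only.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}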
	I0816 17:39:58.688893   45790 kubeadm.go:392] StartCluster: {Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:39:58.689003   45790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:39:58.689051   45790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:39:58.723885   45790 command_runner.go:130] > cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac
	I0816 17:39:58.723910   45790 command_runner.go:130] > ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0
	I0816 17:39:58.723918   45790 command_runner.go:130] > d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11
	I0816 17:39:58.723944   45790 command_runner.go:130] > 40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732
	I0816 17:39:58.723956   45790 command_runner.go:130] > a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a
	I0816 17:39:58.723962   45790 command_runner.go:130] > a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f
	I0816 17:39:58.723973   45790 command_runner.go:130] > c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d
	I0816 17:39:58.723983   45790 command_runner.go:130] > 6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f
	I0816 17:39:58.724023   45790 cri.go:89] found id: "cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac"
	I0816 17:39:58.724036   45790 cri.go:89] found id: "ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0"
	I0816 17:39:58.724042   45790 cri.go:89] found id: "d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11"
	I0816 17:39:58.724049   45790 cri.go:89] found id: "40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732"
	I0816 17:39:58.724054   45790 cri.go:89] found id: "a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a"
	I0816 17:39:58.724060   45790 cri.go:89] found id: "a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f"
	I0816 17:39:58.724065   45790 cri.go:89] found id: "c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d"
	I0816 17:39:58.724068   45790 cri.go:89] found id: "6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f"
	I0816 17:39:58.724071   45790 cri.go:89] found id: ""
	I0816 17:39:58.724117   45790 ssh_runner.go:195] Run: sudo runc list -f json
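	StartCluster first enumerates any existing kube-system containers so their IDs can be handled before the control plane is restarted. A small Go sketch that mirrors the crictl invocation shown above (illustrative only; it assumes crictl is on PATH and that sudo works non-interactively):

	// listpods.go - collect the IDs of kube-system containers via crictl,
	// mirroring `crictl ps -a --quiet --label ...` from the log.
	// Illustrative sketch only.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(" ", id)
		}
	}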
	
	
	==> CRI-O <==
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.824421481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830104824397490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54698640-3f5d-4865-8b98-3b8fe16ade39 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.824900905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ce94913-e9ad-4eee-a47e-89c6a0d6210c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.824949840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ce94913-e9ad-4eee-a47e-89c6a0d6210c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.825324786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ce94913-e9ad-4eee-a47e-89c6a0d6210c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.863208186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b225c58-979e-4a59-a2c6-a13e01c0c9e4 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.863328804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b225c58-979e-4a59-a2c6-a13e01c0c9e4 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.864249723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23a40bcb-2b80-43c7-a83c-af66bdc0e125 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.864788580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830104864755246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23a40bcb-2b80-43c7-a83c-af66bdc0e125 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.865330929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1367ebdb-451a-4a60-ac93-db222a0ce07c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.865405881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1367ebdb-451a-4a60-ac93-db222a0ce07c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.865997656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1367ebdb-451a-4a60-ac93-db222a0ce07c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.904788555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c997f745-b7e5-4c47-8ede-afc2186e4256 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.904877919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c997f745-b7e5-4c47-8ede-afc2186e4256 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.905937348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6839047c-d5ba-4679-b042-44fee90f35cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.906397792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830104906368698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6839047c-d5ba-4679-b042-44fee90f35cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.906896952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df534aab-5a20-471b-97ab-712d0cfef1d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.906967332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df534aab-5a20-471b-97ab-712d0cfef1d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.907320824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df534aab-5a20-471b-97ab-712d0cfef1d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.949229648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ea78617-ec0d-4743-92fd-89eeaa7e7b74 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.949507816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ea78617-ec0d-4743-92fd-89eeaa7e7b74 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.950630861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=daba41c3-2805-4782-87b4-9a3203f29478 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.951101943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830104951078976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=daba41c3-2805-4782-87b4-9a3203f29478 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.951623173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f834c6b0-9ec2-44e1-9f78-b80bff1b221c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.951687941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f834c6b0-9ec2-44e1-9f78-b80bff1b221c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:41:44 multinode-797386 crio[2782]: time="2024-08-16 17:41:44.953090751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f834c6b0-9ec2-44e1-9f78-b80bff1b221c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	098cb4d42f459       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   69a58bde7f31e       busybox-7dff88458-6986q
	05ab432af1c9c       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   b81b19f5f5d08       kindnet-ksr6k
	1ac3da3f42414       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   2d4df22c0989e       coredns-6f6b679f8f-bskwd
	f1c359d855cf8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   eff1f1e5ad9f4       storage-provisioner
	e50cd41dedf63       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   50e8d156417dd       kube-proxy-tdmh8
	df7fce7cad33c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   e8e4d3319b6d6       kube-scheduler-multinode-797386
	60af2ace6d078       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   4ab9f1526277a       etcd-multinode-797386
	789e8d05ce4d8       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   ff3bb9a3535c8       kube-controller-manager-multinode-797386
	7e5a8f300597d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   f72fd728d5193       kube-apiserver-multinode-797386
	f4d0ad9bebe5d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   f95d33ca1bb33       busybox-7dff88458-6986q
	cdce23922d9c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   293fa56faeea3       coredns-6f6b679f8f-bskwd
	ab5b49957cace       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   ad2ceb5c4d227       storage-provisioner
	d7de7a6593d24       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   b847be3789c2c       kindnet-ksr6k
	40703b34f4634       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   000e360ba74ad       kube-proxy-tdmh8
	a7c3646b332f9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   99cc5c92b2179       etcd-multinode-797386
	a9a8478e7a74e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   564e02472f5b7       kube-controller-manager-multinode-797386
	c48fc7ab9fc9b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   08901405eaafc       kube-apiserver-multinode-797386
	6c6740953e611       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   eadfe17cfad28       kube-scheduler-multinode-797386
	
	
	==> coredns [1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49747 - 32955 "HINFO IN 4622227773593751248.7735566643631206532. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011781015s
	
	
	==> coredns [cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac] <==
	[INFO] 10.244.1.2:59194 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001860817s
	[INFO] 10.244.1.2:56815 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009145s
	[INFO] 10.244.1.2:47769 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075554s
	[INFO] 10.244.1.2:39798 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001272791s
	[INFO] 10.244.1.2:54706 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066123s
	[INFO] 10.244.1.2:34364 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060491s
	[INFO] 10.244.1.2:54133 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069582s
	[INFO] 10.244.0.3:35865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012994s
	[INFO] 10.244.0.3:56139 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000041737s
	[INFO] 10.244.0.3:49918 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039779s
	[INFO] 10.244.0.3:37411 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031333s
	[INFO] 10.244.1.2:36935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109219s
	[INFO] 10.244.1.2:34050 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119027s
	[INFO] 10.244.1.2:36132 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080311s
	[INFO] 10.244.1.2:42498 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055105s
	[INFO] 10.244.0.3:40926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194258s
	[INFO] 10.244.0.3:52515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00007584s
	[INFO] 10.244.0.3:46454 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070512s
	[INFO] 10.244.0.3:33899 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080169s
	[INFO] 10.244.1.2:43068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151468s
	[INFO] 10.244.1.2:33364 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149498s
	[INFO] 10.244.1.2:38424 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007622s
	[INFO] 10.244.1.2:36342 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112145s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-797386
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-797386
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=multinode-797386
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_33_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:33:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-797386
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:41:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    multinode-797386
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 61e13cf0aea544eebccb4bbf7da65841
	  System UUID:                61e13cf0-aea5-44ee-bccb-4bbf7da65841
	  Boot ID:                    ac23b698-afdd-47fb-a552-4de7e8c23dc5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6986q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 coredns-6f6b679f8f-bskwd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-797386                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m28s
	  kube-system                 kindnet-ksr6k                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-797386             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-controller-manager-multinode-797386    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-tdmh8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-797386             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m21s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m28s                kubelet          Node multinode-797386 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m28s                kubelet          Node multinode-797386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s                kubelet          Node multinode-797386 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m28s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m23s                node-controller  Node multinode-797386 event: Registered Node multinode-797386 in Controller
	  Normal  NodeReady                8m7s                 kubelet          Node multinode-797386 status is now: NodeReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-797386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-797386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-797386 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-797386 event: Registered Node multinode-797386 in Controller
	
	
	Name:               multinode-797386-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-797386-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=multinode-797386
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_40_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:40:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-797386-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:41:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:40:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:40:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:40:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:41:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    multinode-797386-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4942706ae02e46b1ad0097d8fe1d8139
	  System UUID:                4942706a-e02e-46b1-ad00-97d8fe1d8139
	  Boot ID:                    a6f68d2d-c7a9-430d-bc73-9134ba12128a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dpsv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-wz6gh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m39s
	  kube-system                 kube-proxy-gdpkq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m34s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m39s (x2 over 7m39s)  kubelet     Node multinode-797386-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x2 over 7m39s)  kubelet     Node multinode-797386-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s (x2 over 7m39s)  kubelet     Node multinode-797386-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m19s                  kubelet     Node multinode-797386-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x2 over 62s)      kubelet     Node multinode-797386-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 62s)      kubelet     Node multinode-797386-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 62s)      kubelet     Node multinode-797386-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                42s                    kubelet     Node multinode-797386-m02 status is now: NodeReady
	
	
	Name:               multinode-797386-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-797386-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=multinode-797386
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_41_23_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:41:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-797386-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:41:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:41:41 +0000   Fri, 16 Aug 2024 17:41:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:41:41 +0000   Fri, 16 Aug 2024 17:41:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:41:41 +0000   Fri, 16 Aug 2024 17:41:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:41:41 +0000   Fri, 16 Aug 2024 17:41:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    multinode-797386-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fda714fc57cd4bd984ab1ce1efce0ef0
	  System UUID:                fda714fc-57cd-4bd9-84ab-1ce1efce0ef0
	  Boot ID:                    3d695618-ce86-48be-ac5b-960cecd19962
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fk9hf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-jwxd2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m36s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m47s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m43s)  kubelet     Node multinode-797386-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m43s)  kubelet     Node multinode-797386-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m43s)  kubelet     Node multinode-797386-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m22s                  kubelet     Node multinode-797386-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet     Node multinode-797386-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet     Node multinode-797386-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet     Node multinode-797386-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m33s                  kubelet     Node multinode-797386-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-797386-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-797386-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-797386-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-797386-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062095] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.173494] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.137993] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.259680] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.773609] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.397247] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.061371] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.509967] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.076510] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.147734] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.432695] systemd-fstab-generator[1340]: Ignoring "noauto" option for root device
	[  +4.946221] kauditd_printk_skb: 59 callbacks suppressed
	[Aug16 17:34] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 17:39] systemd-fstab-generator[2701]: Ignoring "noauto" option for root device
	[  +0.139386] systemd-fstab-generator[2713]: Ignoring "noauto" option for root device
	[  +0.170362] systemd-fstab-generator[2727]: Ignoring "noauto" option for root device
	[  +0.141971] systemd-fstab-generator[2739]: Ignoring "noauto" option for root device
	[  +0.264535] systemd-fstab-generator[2767]: Ignoring "noauto" option for root device
	[  +8.428790] systemd-fstab-generator[2865]: Ignoring "noauto" option for root device
	[  +0.088295] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.681147] systemd-fstab-generator[2987]: Ignoring "noauto" option for root device
	[Aug16 17:40] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.189245] systemd-fstab-generator[3828]: Ignoring "noauto" option for root device
	[  +0.095202] kauditd_printk_skb: 34 callbacks suppressed
	[ +19.498294] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135] <==
	{"level":"info","ts":"2024-08-16T17:40:01.291769Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","added-peer-id":"e5f6aca4c72f5b22","added-peer-peer-urls":["https://192.168.39.218:2380"]}
	{"level":"info","ts":"2024-08-16T17:40:01.292842Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:40:01.293035Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:40:01.296549Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:40:01.298219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T17:40:01.300381Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e5f6aca4c72f5b22","initial-advertise-peer-urls":["https://192.168.39.218:2380"],"listen-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.218:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T17:40:01.300418Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T17:40:01.300610Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:40:01.300676Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:40:03.048934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T17:40:03.048996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T17:40:03.049046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgPreVoteResp from e5f6aca4c72f5b22 at term 2"}
	{"level":"info","ts":"2024-08-16T17:40:03.049062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.049068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgVoteResp from e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.049103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.049115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5f6aca4c72f5b22 elected leader e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.054810Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e5f6aca4c72f5b22","local-member-attributes":"{Name:multinode-797386 ClientURLs:[https://192.168.39.218:2379]}","request-path":"/0/members/e5f6aca4c72f5b22/attributes","cluster-id":"2483a61a4a74c1c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:40:03.055019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:40:03.055519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:40:03.055613Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:40:03.055635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:40:03.056386Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:40:03.056390Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:40:03.057326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T17:40:03.057327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.218:2379"}
	
	
	==> etcd [a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a] <==
	{"level":"info","ts":"2024-08-16T17:34:06.813639Z","caller":"traceutil/trace.go:171","msg":"trace[1946004062] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:454; }","duration":"224.277596ms","start":"2024-08-16T17:34:06.589335Z","end":"2024-08-16T17:34:06.813612Z","steps":["trace[1946004062] 'read index received'  (duration: 72.43885ms)","trace[1946004062] 'applied index is now lower than readState.Index'  (duration: 151.836558ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:34:06.813722Z","caller":"traceutil/trace.go:171","msg":"trace[1526626322] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"229.922204ms","start":"2024-08-16T17:34:06.583791Z","end":"2024-08-16T17:34:06.813714Z","steps":["trace[1526626322] 'process raft request'  (duration: 77.947501ms)","trace[1526626322] 'compare'  (duration: 150.77613ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T17:34:06.813926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.420617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-797386-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:34:06.813967Z","caller":"traceutil/trace.go:171","msg":"trace[480461888] range","detail":"{range_begin:/registry/minions/multinode-797386-m02; range_end:; response_count:0; response_revision:438; }","duration":"224.468217ms","start":"2024-08-16T17:34:06.589489Z","end":"2024-08-16T17:34:06.813957Z","steps":["trace[480461888] 'agreement among raft nodes before linearized reading'  (duration: 224.402055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:34:06.814067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.728335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-797386-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:34:06.814097Z","caller":"traceutil/trace.go:171","msg":"trace[1911649428] range","detail":"{range_begin:/registry/csinodes/multinode-797386-m02; range_end:; response_count:0; response_revision:438; }","duration":"224.761428ms","start":"2024-08-16T17:34:06.589330Z","end":"2024-08-16T17:34:06.814092Z","steps":["trace[1911649428] 'agreement among raft nodes before linearized reading'  (duration: 224.719499ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:35:03.159546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.751392ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:35:03.159870Z","caller":"traceutil/trace.go:171","msg":"trace[1079726022] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:573; }","duration":"151.095109ms","start":"2024-08-16T17:35:03.008753Z","end":"2024-08-16T17:35:03.159848Z","steps":["trace[1079726022] 'range keys from in-memory index tree'  (duration: 150.733446ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:03.159566Z","caller":"traceutil/trace.go:171","msg":"trace[137127456] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"203.409311ms","start":"2024-08-16T17:35:02.956134Z","end":"2024-08-16T17:35:03.159543Z","steps":["trace[137127456] 'read index received'  (duration: 198.826391ms)","trace[137127456] 'applied index is now lower than readState.Index'  (duration: 4.582427ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T17:35:03.159756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.585898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-797386-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:35:03.160109Z","caller":"traceutil/trace.go:171","msg":"trace[444285936] range","detail":"{range_begin:/registry/minions/multinode-797386-m03; range_end:; response_count:0; response_revision:574; }","duration":"203.936125ms","start":"2024-08-16T17:35:02.956130Z","end":"2024-08-16T17:35:03.160066Z","steps":["trace[444285936] 'agreement among raft nodes before linearized reading'  (duration: 203.529461ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:03.159779Z","caller":"traceutil/trace.go:171","msg":"trace[1615455947] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"225.282903ms","start":"2024-08-16T17:35:02.934490Z","end":"2024-08-16T17:35:03.159773Z","steps":["trace[1615455947] 'process raft request'  (duration: 220.507695ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:06.416874Z","caller":"traceutil/trace.go:171","msg":"trace[1291442356] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"117.08554ms","start":"2024-08-16T17:35:06.299774Z","end":"2024-08-16T17:35:06.416859Z","steps":["trace[1291442356] 'process raft request'  (duration: 116.883781ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:57.307186Z","caller":"traceutil/trace.go:171","msg":"trace[1337146133] transaction","detail":"{read_only:false; response_revision:702; number_of_response:1; }","duration":"187.451347ms","start":"2024-08-16T17:35:57.119704Z","end":"2024-08-16T17:35:57.307156Z","steps":["trace[1337146133] 'process raft request'  (duration: 187.350688ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:57.311626Z","caller":"traceutil/trace.go:171","msg":"trace[516929672] transaction","detail":"{read_only:false; response_revision:703; number_of_response:1; }","duration":"173.944514ms","start":"2024-08-16T17:35:57.137667Z","end":"2024-08-16T17:35:57.311612Z","steps":["trace[516929672] 'process raft request'  (duration: 173.847389ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:38:17.771346Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-16T17:38:17.771517Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-797386","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	{"level":"warn","ts":"2024-08-16T17:38:17.771662Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:38:17.771780Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:38:17.808831Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:38:17.809020Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T17:38:17.809255Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e5f6aca4c72f5b22","current-leader-member-id":"e5f6aca4c72f5b22"}
	{"level":"info","ts":"2024-08-16T17:38:17.814484Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:38:17.814589Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:38:17.814599Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-797386","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	
	
	==> kernel <==
	 17:41:45 up 9 min,  0 users,  load average: 0.11, 0.25, 0.18
	Linux multinode-797386 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16] <==
	I0816 17:40:56.849564       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:41:06.849639       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:41:06.849748       1 main.go:299] handling current node
	I0816 17:41:06.849776       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:41:06.849799       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:41:06.849934       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:41:06.849955       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:41:16.848645       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:41:16.848688       1 main.go:299] handling current node
	I0816 17:41:16.848703       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:41:16.848708       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:41:16.848837       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:41:16.849190       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:41:26.848719       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:41:26.848836       1 main.go:299] handling current node
	I0816 17:41:26.848869       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:41:26.848891       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:41:26.849043       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:41:26.849070       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.2.0/24] 
	I0816 17:41:36.848754       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:41:36.848865       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.2.0/24] 
	I0816 17:41:36.849038       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:41:36.849112       1 main.go:299] handling current node
	I0816 17:41:36.849149       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:41:36.849167       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11] <==
	I0816 17:37:28.045902       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:37:38.046030       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:37:38.046141       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:37:38.046295       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:37:38.046322       1 main.go:299] handling current node
	I0816 17:37:38.046358       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:37:38.046376       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:37:48.053569       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:37:48.053611       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:37:48.053791       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:37:48.053811       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:37:48.053892       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:37:48.053912       1 main.go:299] handling current node
	I0816 17:37:58.045480       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:37:58.045573       1 main.go:299] handling current node
	I0816 17:37:58.045602       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:37:58.045611       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:37:58.045792       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:37:58.045814       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:38:08.047106       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:38:08.047268       1 main.go:299] handling current node
	I0816 17:38:08.047316       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:38:08.047336       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:38:08.047527       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:38:08.047559       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65] <==
	I0816 17:40:04.339680       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:40:04.343546       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:40:04.343644       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:40:04.343718       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:40:04.344621       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:40:04.344951       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:40:04.345035       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0816 17:40:04.377828       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 17:40:04.379875       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:40:04.397548       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:40:04.397621       1 policy_source.go:224] refreshing policies
	I0816 17:40:04.399331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:40:04.433535       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:40:04.434003       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:40:04.436170       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:40:04.439204       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:40:04.448605       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:40:05.244904       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:40:06.504196       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:40:06.641341       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:40:06.661420       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:40:06.729513       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:40:06.735075       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:40:07.885122       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:40:07.937003       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d] <==
	E0816 17:34:34.473737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38848: use of closed network connection
	E0816 17:34:34.638696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38854: use of closed network connection
	E0816 17:34:34.797185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38870: use of closed network connection
	E0816 17:34:34.959574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38888: use of closed network connection
	I0816 17:38:17.773747       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0816 17:38:17.776851       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.776925       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.776962       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.785622       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.794962       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.797502       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798129       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798211       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798268       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798329       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798382       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798966       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799067       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799106       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799139       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799191       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799242       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799281       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799315       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799368       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3] <==
	I0816 17:41:03.379573       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:41:03.389578       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:41:03.397089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.198µs"
	I0816 17:41:03.409578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.379µs"
	I0816 17:41:07.145693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.311168ms"
	I0816 17:41:07.146767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.839µs"
	I0816 17:41:07.896832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:41:14.350279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:41:21.106702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:21.121686       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:21.338768       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:41:21.339018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:22.383562       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:41:22.384230       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-797386-m03\" does not exist"
	I0816 17:41:22.405029       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-797386-m03" podCIDRs=["10.244.2.0/24"]
	I0816 17:41:22.405457       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:22.405551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:22.823886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:22.949712       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:23.147561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:32.490086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:41.805890       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:41:41.806007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:41.814643       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:42.914538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	
	
	==> kube-controller-manager [a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f] <==
	I0816 17:35:51.381913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:51.606046       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:35:51.606158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:52.876802       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:35:52.876854       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-797386-m03\" does not exist"
	I0816 17:35:52.894683       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-797386-m03" podCIDRs=["10.244.3.0/24"]
	I0816 17:35:52.894760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:52.894800       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:53.099895       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:53.425292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:57.314508       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:03.203063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:12.469626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:12.470475       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:36:12.481423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:17.135375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:52.152879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:36:52.153347       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m03"
	I0816 17:36:52.177243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:36:52.216105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.632246ms"
	I0816 17:36:52.216291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.396µs"
	I0816 17:36:57.215258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:57.233877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:57.284010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:37:07.357152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	
	
	==> kube-proxy [40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:33:23.907726       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:33:23.926518       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	E0816 17:33:23.926808       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:33:23.958144       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:33:23.958174       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:33:23.958206       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:33:23.961914       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:33:23.962217       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:33:23.962274       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:33:23.963489       1 config.go:197] "Starting service config controller"
	I0816 17:33:23.963555       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:33:23.963593       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:33:23.963608       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:33:23.964069       1 config.go:326] "Starting node config controller"
	I0816 17:33:23.965698       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:33:24.064358       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 17:33:24.064478       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:33:24.065893       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:40:06.102144       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:40:06.123046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	E0816 17:40:06.123104       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:40:06.216323       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:40:06.216366       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:40:06.216392       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:40:06.218679       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:40:06.218900       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:40:06.218911       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:40:06.220588       1 config.go:197] "Starting service config controller"
	I0816 17:40:06.220602       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:40:06.220618       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:40:06.220622       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:40:06.221121       1 config.go:326] "Starting node config controller"
	I0816 17:40:06.221130       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:40:06.322317       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:40:06.322357       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:40:06.322394       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f] <==
	E0816 17:33:16.180755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.331246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 17:33:16.331296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.359866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:33:16.360011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.359978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:33:16.360142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.377801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:33:16.378087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.395497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:33:16.395637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.395727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:33:16.395840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.417509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:33:16.417554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.425543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:33:16.425587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.479378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:33:16.479562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.479928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:33:16.479966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.487689       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:33:16.487821       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:33:18.276640       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:38:17.763087       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2] <==
	I0816 17:40:01.885255       1 serving.go:386] Generated self-signed cert in-memory
	W0816 17:40:04.330713       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 17:40:04.330798       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 17:40:04.330859       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 17:40:04.330882       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 17:40:04.376876       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 17:40:04.376951       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:40:04.386698       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 17:40:04.387191       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 17:40:04.388520       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 17:40:04.388597       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:40:04.489629       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 17:40:10 multinode-797386 kubelet[2994]: E0816 17:40:10.256753    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830010255510868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:10 multinode-797386 kubelet[2994]: E0816 17:40:10.257216    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830010255510868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:20 multinode-797386 kubelet[2994]: E0816 17:40:20.259133    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830020258672169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:20 multinode-797386 kubelet[2994]: E0816 17:40:20.259181    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830020258672169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:30 multinode-797386 kubelet[2994]: E0816 17:40:30.260905    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830030260541379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:30 multinode-797386 kubelet[2994]: E0816 17:40:30.260970    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830030260541379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:40 multinode-797386 kubelet[2994]: E0816 17:40:40.263289    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830040262305399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:40 multinode-797386 kubelet[2994]: E0816 17:40:40.263396    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830040262305399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:50 multinode-797386 kubelet[2994]: E0816 17:40:50.265318    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830050264857958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:40:50 multinode-797386 kubelet[2994]: E0816 17:40:50.265359    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830050264857958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:00 multinode-797386 kubelet[2994]: E0816 17:41:00.220485    2994 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:41:00 multinode-797386 kubelet[2994]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:41:00 multinode-797386 kubelet[2994]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:41:00 multinode-797386 kubelet[2994]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:41:00 multinode-797386 kubelet[2994]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:41:00 multinode-797386 kubelet[2994]: E0816 17:41:00.268103    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830060267837317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:00 multinode-797386 kubelet[2994]: E0816 17:41:00.268125    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830060267837317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:10 multinode-797386 kubelet[2994]: E0816 17:41:10.271857    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830070270576836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:10 multinode-797386 kubelet[2994]: E0816 17:41:10.272248    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830070270576836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:20 multinode-797386 kubelet[2994]: E0816 17:41:20.274209    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830080273704980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:20 multinode-797386 kubelet[2994]: E0816 17:41:20.274249    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830080273704980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:30 multinode-797386 kubelet[2994]: E0816 17:41:30.276177    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830090275612170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:30 multinode-797386 kubelet[2994]: E0816 17:41:30.276249    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830090275612170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:40 multinode-797386 kubelet[2994]: E0816 17:41:40.277988    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830100277641381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:41:40 multinode-797386 kubelet[2994]: E0816 17:41:40.278260    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830100277641381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 17:41:44.561846   46980 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19461-9545/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-797386 -n multinode-797386
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-797386 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (331.30s)
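
Note on the "failed to output last start logs ... bufio.Scanner: token too long" error in the stderr capture above: Go's bufio.Scanner refuses lines longer than its default limit of 64 KiB (bufio.MaxScanTokenSize), and lastStart.txt evidently contains such a line. The sketch below is illustrative only (not minikube's actual logs code); it shows the standard way to read a file with an enlarged per-line buffer. The file name and buffer sizes are assumptions.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative path; the report refers to .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer line yields
		// "bufio.Scanner: token too long". Allow lines up to 10 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process one log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}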

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 stop
E0816 17:43:21.062457   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-797386 stop: exit status 82 (2m0.459274661s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-797386-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-797386 stop": exit status 82
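
Exit status 82 (GUEST_STOP_TIMEOUT) indicates the stop path kept polling the VM state until its deadline expired while the machine still reported "Running". The following is a rough sketch of that wait pattern under assumed names (a hypothetical state() probe and timeouts); it is not the actual minikube/libmachine implementation.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// pollUntilStopped repeatedly checks the VM state and gives up when the
	// context deadline passes, which is roughly how a stop timeout surfaces.
	func pollUntilStopped(ctx context.Context, state func() (string, error)) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			s, err := state()
			if err != nil {
				return err
			}
			if s == "Stopped" {
				return nil
			}
			select {
			case <-ctx.Done():
				return errors.New(`unable to stop vm, current state "` + s + `"`)
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		// Hypothetical probe that never reports "Stopped", mimicking the failure above.
		err := pollUntilStopped(ctx, func() (string, error) { return "Running", nil })
		fmt.Println(err)
	}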
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-797386 status: exit status 3 (18.735612892s)

                                                
                                                
-- stdout --
	multinode-797386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-797386-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 17:44:07.628995   47651 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0816 17:44:07.629031   47651 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-797386 status" : exit status 3
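
The "dial tcp 192.168.39.27:22: connect: no route to host" errors above mean the status check could not open a TCP connection to the m02 node's SSH port while that VM was mid-stop. A minimal reachability probe of the same kind, using the IP and port taken from the log (illustrative only, not the status command's code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Node address taken from the status error above; purely a connectivity probe.
		addr := "192.168.39.27:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // e.g. "connect: no route to host"
			return
		}
		defer conn.Close()
		fmt.Println("reachable:", addr)
	}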
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-797386 -n multinode-797386
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-797386 logs -n 25: (1.456695903s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386:/home/docker/cp-test_multinode-797386-m02_multinode-797386.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386 sudo cat                                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m02_multinode-797386.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03:/home/docker/cp-test_multinode-797386-m02_multinode-797386-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386-m03 sudo cat                                   | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m02_multinode-797386-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp testdata/cp-test.txt                                                | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3908969690/001/cp-test_multinode-797386-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386:/home/docker/cp-test_multinode-797386-m03_multinode-797386.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386 sudo cat                                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m03_multinode-797386.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt                       | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m02:/home/docker/cp-test_multinode-797386-m03_multinode-797386-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n                                                                 | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | multinode-797386-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-797386 ssh -n multinode-797386-m02 sudo cat                                   | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	|         | /home/docker/cp-test_multinode-797386-m03_multinode-797386-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-797386 node stop m03                                                          | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:35 UTC |
	| node    | multinode-797386 node start                                                             | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:35 UTC | 16 Aug 24 17:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-797386                                                                | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:36 UTC |                     |
	| stop    | -p multinode-797386                                                                     | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:36 UTC |                     |
	| start   | -p multinode-797386                                                                     | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:38 UTC | 16 Aug 24 17:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-797386                                                                | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:41 UTC |                     |
	| node    | multinode-797386 node delete                                                            | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:41 UTC | 16 Aug 24 17:41 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-797386 stop                                                                   | multinode-797386 | jenkins | v1.33.1 | 16 Aug 24 17:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:38:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:38:16.837227   45790 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:38:16.837335   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:38:16.837348   45790 out.go:358] Setting ErrFile to fd 2...
	I0816 17:38:16.837353   45790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:38:16.837521   45790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:38:16.838083   45790 out.go:352] Setting JSON to false
	I0816 17:38:16.839021   45790 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4795,"bootTime":1723825102,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:38:16.839079   45790 start.go:139] virtualization: kvm guest
	I0816 17:38:16.841196   45790 out.go:177] * [multinode-797386] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:38:16.842645   45790 notify.go:220] Checking for updates...
	I0816 17:38:16.842650   45790 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:38:16.844113   45790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:38:16.845458   45790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:38:16.846610   45790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:38:16.847732   45790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:38:16.848881   45790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:38:16.850870   45790 config.go:182] Loaded profile config "multinode-797386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:38:16.850945   45790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:38:16.851366   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:38:16.851411   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:38:16.866398   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42447
	I0816 17:38:16.866792   45790 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:38:16.867386   45790 main.go:141] libmachine: Using API Version  1
	I0816 17:38:16.867406   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:38:16.867775   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:38:16.868013   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:38:16.902858   45790 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 17:38:16.904046   45790 start.go:297] selected driver: kvm2
	I0816 17:38:16.904069   45790 start.go:901] validating driver "kvm2" against &{Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:38:16.904232   45790 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:38:16.904604   45790 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:38:16.904710   45790 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 17:38:16.920164   45790 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 17:38:16.921047   45790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:38:16.921128   45790 cni.go:84] Creating CNI manager for ""
	I0816 17:38:16.921138   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 17:38:16.921221   45790 start.go:340] cluster config:
	{Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:38:16.921400   45790 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:38:16.923090   45790 out.go:177] * Starting "multinode-797386" primary control-plane node in "multinode-797386" cluster
	I0816 17:38:16.924158   45790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:38:16.924186   45790 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 17:38:16.924195   45790 cache.go:56] Caching tarball of preloaded images
	I0816 17:38:16.924269   45790 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:38:16.924280   45790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 17:38:16.924402   45790 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/config.json ...
	I0816 17:38:16.924640   45790 start.go:360] acquireMachinesLock for multinode-797386: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:38:16.924693   45790 start.go:364] duration metric: took 29.995µs to acquireMachinesLock for "multinode-797386"
	I0816 17:38:16.924713   45790 start.go:96] Skipping create...Using existing machine configuration
	I0816 17:38:16.924725   45790 fix.go:54] fixHost starting: 
	I0816 17:38:16.925011   45790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:38:16.925042   45790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:38:16.939606   45790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0816 17:38:16.940051   45790 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:38:16.940537   45790 main.go:141] libmachine: Using API Version  1
	I0816 17:38:16.940558   45790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:38:16.940901   45790 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:38:16.941064   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:38:16.941240   45790 main.go:141] libmachine: (multinode-797386) Calling .GetState
	I0816 17:38:16.942867   45790 fix.go:112] recreateIfNeeded on multinode-797386: state=Running err=<nil>
	W0816 17:38:16.942898   45790 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 17:38:16.944796   45790 out.go:177] * Updating the running kvm2 "multinode-797386" VM ...
	I0816 17:38:16.946013   45790 machine.go:93] provisionDockerMachine start ...
	I0816 17:38:16.946033   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:38:16.946237   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:16.948944   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:16.949405   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:16.949447   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:16.949531   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:16.949729   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:16.949909   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:16.950072   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:16.950368   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:16.950553   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:16.950566   45790 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 17:38:17.069477   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-797386
	
	I0816 17:38:17.069512   45790 main.go:141] libmachine: (multinode-797386) Calling .GetMachineName
	I0816 17:38:17.069749   45790 buildroot.go:166] provisioning hostname "multinode-797386"
	I0816 17:38:17.069772   45790 main.go:141] libmachine: (multinode-797386) Calling .GetMachineName
	I0816 17:38:17.069987   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.073086   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.073500   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.073533   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.073689   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.073872   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.074037   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.074203   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.074469   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:17.074623   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:17.074641   45790 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-797386 && echo "multinode-797386" | sudo tee /etc/hostname
	I0816 17:38:17.204595   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-797386
	
	I0816 17:38:17.204628   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.207746   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.208199   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.208233   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.208443   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.208639   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.208780   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.208948   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.209088   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:17.209260   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:17.209276   45790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-797386' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-797386/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-797386' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:38:17.321252   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:38:17.321280   45790 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:38:17.321317   45790 buildroot.go:174] setting up certificates
	I0816 17:38:17.321328   45790 provision.go:84] configureAuth start
	I0816 17:38:17.321342   45790 main.go:141] libmachine: (multinode-797386) Calling .GetMachineName
	I0816 17:38:17.321642   45790 main.go:141] libmachine: (multinode-797386) Calling .GetIP
	I0816 17:38:17.324113   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.324446   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.324476   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.324601   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.326884   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.327240   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.327273   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.327372   45790 provision.go:143] copyHostCerts
	I0816 17:38:17.327401   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:38:17.327440   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:38:17.327455   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:38:17.327521   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:38:17.327617   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:38:17.327634   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:38:17.327639   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:38:17.327663   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:38:17.327748   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:38:17.327765   45790 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:38:17.327771   45790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:38:17.327800   45790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:38:17.327887   45790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.multinode-797386 san=[127.0.0.1 192.168.39.218 localhost minikube multinode-797386]
	I0816 17:38:17.449642   45790 provision.go:177] copyRemoteCerts
	I0816 17:38:17.449705   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:38:17.449727   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.452434   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.452879   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.452911   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.453140   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.453345   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.453563   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.453706   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:38:17.538254   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 17:38:17.538313   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:38:17.577141   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 17:38:17.577210   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0816 17:38:17.601934   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 17:38:17.601988   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:38:17.635299   45790 provision.go:87] duration metric: took 313.959165ms to configureAuth
	I0816 17:38:17.635321   45790 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:38:17.635543   45790 config.go:182] Loaded profile config "multinode-797386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:38:17.635609   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:38:17.638159   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.638553   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:38:17.638590   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:38:17.638772   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:38:17.638984   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.639168   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:38:17.639319   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:38:17.639488   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:38:17.639700   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:38:17.639716   45790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:39:48.412501   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:39:48.412555   45790 machine.go:96] duration metric: took 1m31.466526132s to provisionDockerMachine
	I0816 17:39:48.412591   45790 start.go:293] postStartSetup for "multinode-797386" (driver="kvm2")
	I0816 17:39:48.412646   45790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:39:48.412686   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.413132   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:39:48.413168   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.416296   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.416796   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.416823   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.417030   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.417232   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.417401   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.417541   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:39:48.503856   45790 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:39:48.507834   45790 command_runner.go:130] > NAME=Buildroot
	I0816 17:39:48.507852   45790 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 17:39:48.507859   45790 command_runner.go:130] > ID=buildroot
	I0816 17:39:48.507869   45790 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 17:39:48.507876   45790 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 17:39:48.507936   45790 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:39:48.507962   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:39:48.508036   45790 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:39:48.508112   45790 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:39:48.508121   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /etc/ssl/certs/167532.pem
	I0816 17:39:48.508198   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:39:48.517032   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:39:48.539799   45790 start.go:296] duration metric: took 127.191585ms for postStartSetup
	I0816 17:39:48.539860   45790 fix.go:56] duration metric: took 1m31.615139668s for fixHost
	I0816 17:39:48.539893   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.542819   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.543187   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.543216   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.543391   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.543624   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.543835   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.543971   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.544145   45790 main.go:141] libmachine: Using SSH client type: native
	I0816 17:39:48.544303   45790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0816 17:39:48.544312   45790 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:39:48.656917   45790 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829988.629867005
	
	I0816 17:39:48.656937   45790 fix.go:216] guest clock: 1723829988.629867005
	I0816 17:39:48.656954   45790 fix.go:229] Guest: 2024-08-16 17:39:48.629867005 +0000 UTC Remote: 2024-08-16 17:39:48.539871648 +0000 UTC m=+91.738991900 (delta=89.995357ms)
	I0816 17:39:48.656982   45790 fix.go:200] guest clock delta is within tolerance: 89.995357ms
	I0816 17:39:48.656988   45790 start.go:83] releasing machines lock for "multinode-797386", held for 1m31.732283366s
	I0816 17:39:48.657008   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.657255   45790 main.go:141] libmachine: (multinode-797386) Calling .GetIP
	I0816 17:39:48.659940   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.660305   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.660330   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.660463   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.660958   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.661128   45790 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:39:48.661202   45790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:39:48.661254   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.661372   45790 ssh_runner.go:195] Run: cat /version.json
	I0816 17:39:48.661394   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:39:48.663936   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664161   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664437   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.664464   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664581   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:48.664603   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:48.664653   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.664735   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:39:48.664813   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.664883   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:39:48.664937   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.665042   45790 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:39:48.665091   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:39:48.665158   45790 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:39:48.745092   45790 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0816 17:39:48.745329   45790 ssh_runner.go:195] Run: systemctl --version
	I0816 17:39:48.786038   45790 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 17:39:48.786103   45790 command_runner.go:130] > systemd 252 (252)
	I0816 17:39:48.786140   45790 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0816 17:39:48.786211   45790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:39:48.941095   45790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 17:39:48.946586   45790 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 17:39:48.946733   45790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:39:48.946803   45790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:39:48.955875   45790 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 17:39:48.955895   45790 start.go:495] detecting cgroup driver to use...
	I0816 17:39:48.955955   45790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:39:48.971461   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:39:48.984534   45790 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:39:48.984603   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:39:48.997732   45790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:39:49.011264   45790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:39:49.149843   45790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:39:49.286037   45790 docker.go:233] disabling docker service ...
	I0816 17:39:49.286116   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:39:49.303105   45790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:39:49.316640   45790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:39:49.455302   45790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:39:49.600322   45790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:39:49.613527   45790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:39:49.630926   45790 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0816 17:39:49.631349   45790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 17:39:49.631397   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.641394   45790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:39:49.641453   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.651109   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.660887   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.670607   45790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:39:49.680288   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.689719   45790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.699617   45790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:39:49.709148   45790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:39:49.717798   45790 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 17:39:49.717858   45790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:39:49.726334   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:39:49.867195   45790 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:39:57.859806   45790 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.992568521s)
	I0816 17:39:57.859837   45790 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:39:57.859879   45790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:39:57.864472   45790 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0816 17:39:57.864493   45790 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 17:39:57.864505   45790 command_runner.go:130] > Device: 0,22	Inode: 1333        Links: 1
	I0816 17:39:57.864514   45790 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 17:39:57.864519   45790 command_runner.go:130] > Access: 2024-08-16 17:39:57.730238821 +0000
	I0816 17:39:57.864529   45790 command_runner.go:130] > Modify: 2024-08-16 17:39:57.730238821 +0000
	I0816 17:39:57.864535   45790 command_runner.go:130] > Change: 2024-08-16 17:39:57.730238821 +0000
	I0816 17:39:57.864539   45790 command_runner.go:130] >  Birth: -
	I0816 17:39:57.864582   45790 start.go:563] Will wait 60s for crictl version
	I0816 17:39:57.864640   45790 ssh_runner.go:195] Run: which crictl
	I0816 17:39:57.867942   45790 command_runner.go:130] > /usr/bin/crictl
	I0816 17:39:57.868095   45790 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:39:57.901662   45790 command_runner.go:130] > Version:  0.1.0
	I0816 17:39:57.901692   45790 command_runner.go:130] > RuntimeName:  cri-o
	I0816 17:39:57.901700   45790 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0816 17:39:57.901708   45790 command_runner.go:130] > RuntimeApiVersion:  v1
	I0816 17:39:57.902711   45790 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:39:57.902792   45790 ssh_runner.go:195] Run: crio --version
	I0816 17:39:57.930494   45790 command_runner.go:130] > crio version 1.29.1
	I0816 17:39:57.930522   45790 command_runner.go:130] > Version:        1.29.1
	I0816 17:39:57.930531   45790 command_runner.go:130] > GitCommit:      unknown
	I0816 17:39:57.930538   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0816 17:39:57.930544   45790 command_runner.go:130] > GitTreeState:   clean
	I0816 17:39:57.930553   45790 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0816 17:39:57.930560   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 17:39:57.930566   45790 command_runner.go:130] > Compiler:       gc
	I0816 17:39:57.930572   45790 command_runner.go:130] > Platform:       linux/amd64
	I0816 17:39:57.930578   45790 command_runner.go:130] > Linkmode:       dynamic
	I0816 17:39:57.930599   45790 command_runner.go:130] > BuildTags:      
	I0816 17:39:57.930608   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0816 17:39:57.930614   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 17:39:57.930623   45790 command_runner.go:130] >   btrfs_noversion
	I0816 17:39:57.930630   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 17:39:57.930637   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 17:39:57.930642   45790 command_runner.go:130] >   seccomp
	I0816 17:39:57.930651   45790 command_runner.go:130] > LDFlags:          unknown
	I0816 17:39:57.930658   45790 command_runner.go:130] > SeccompEnabled:   true
	I0816 17:39:57.930666   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0816 17:39:57.931771   45790 ssh_runner.go:195] Run: crio --version
	I0816 17:39:57.957761   45790 command_runner.go:130] > crio version 1.29.1
	I0816 17:39:57.957788   45790 command_runner.go:130] > Version:        1.29.1
	I0816 17:39:57.957797   45790 command_runner.go:130] > GitCommit:      unknown
	I0816 17:39:57.957804   45790 command_runner.go:130] > GitCommitDate:  unknown
	I0816 17:39:57.957809   45790 command_runner.go:130] > GitTreeState:   clean
	I0816 17:39:57.957818   45790 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0816 17:39:57.957825   45790 command_runner.go:130] > GoVersion:      go1.21.6
	I0816 17:39:57.957831   45790 command_runner.go:130] > Compiler:       gc
	I0816 17:39:57.957838   45790 command_runner.go:130] > Platform:       linux/amd64
	I0816 17:39:57.957844   45790 command_runner.go:130] > Linkmode:       dynamic
	I0816 17:39:57.957851   45790 command_runner.go:130] > BuildTags:      
	I0816 17:39:57.957895   45790 command_runner.go:130] >   containers_image_ostree_stub
	I0816 17:39:57.957906   45790 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0816 17:39:57.957912   45790 command_runner.go:130] >   btrfs_noversion
	I0816 17:39:57.957916   45790 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0816 17:39:57.957921   45790 command_runner.go:130] >   libdm_no_deferred_remove
	I0816 17:39:57.957925   45790 command_runner.go:130] >   seccomp
	I0816 17:39:57.957932   45790 command_runner.go:130] > LDFlags:          unknown
	I0816 17:39:57.957936   45790 command_runner.go:130] > SeccompEnabled:   true
	I0816 17:39:57.957942   45790 command_runner.go:130] > AppArmorEnabled:  false
	I0816 17:39:57.960912   45790 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 17:39:57.962069   45790 main.go:141] libmachine: (multinode-797386) Calling .GetIP
	I0816 17:39:57.964726   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:57.965139   45790 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:39:57.965170   45790 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:39:57.965368   45790 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:39:57.969385   45790 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0816 17:39:57.969498   45790 kubeadm.go:883] updating cluster {Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:39:57.969676   45790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 17:39:57.969732   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:39:58.011427   45790 command_runner.go:130] > {
	I0816 17:39:58.011453   45790 command_runner.go:130] >   "images": [
	I0816 17:39:58.011467   45790 command_runner.go:130] >     {
	I0816 17:39:58.011480   45790 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 17:39:58.011491   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011500   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 17:39:58.011505   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011511   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011533   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 17:39:58.011548   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 17:39:58.011565   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011575   45790 command_runner.go:130] >       "size": "87165492",
	I0816 17:39:58.011581   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011591   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011601   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011611   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011618   45790 command_runner.go:130] >     },
	I0816 17:39:58.011624   45790 command_runner.go:130] >     {
	I0816 17:39:58.011632   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 17:39:58.011639   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011647   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 17:39:58.011654   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011660   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011671   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 17:39:58.011682   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 17:39:58.011691   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011699   45790 command_runner.go:130] >       "size": "87190579",
	I0816 17:39:58.011708   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011720   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011729   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011735   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011743   45790 command_runner.go:130] >     },
	I0816 17:39:58.011749   45790 command_runner.go:130] >     {
	I0816 17:39:58.011764   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 17:39:58.011773   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011780   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 17:39:58.011787   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011796   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011817   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 17:39:58.011832   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 17:39:58.011841   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011855   45790 command_runner.go:130] >       "size": "1363676",
	I0816 17:39:58.011863   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011870   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011876   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011881   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011886   45790 command_runner.go:130] >     },
	I0816 17:39:58.011890   45790 command_runner.go:130] >     {
	I0816 17:39:58.011899   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 17:39:58.011904   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.011911   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 17:39:58.011916   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011922   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.011933   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 17:39:58.011953   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 17:39:58.011958   45790 command_runner.go:130] >       ],
	I0816 17:39:58.011965   45790 command_runner.go:130] >       "size": "31470524",
	I0816 17:39:58.011970   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.011975   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.011981   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.011986   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.011991   45790 command_runner.go:130] >     },
	I0816 17:39:58.011997   45790 command_runner.go:130] >     {
	I0816 17:39:58.012005   45790 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 17:39:58.012014   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012021   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 17:39:58.012027   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012037   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012050   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 17:39:58.012063   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 17:39:58.012068   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012076   45790 command_runner.go:130] >       "size": "61245718",
	I0816 17:39:58.012082   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.012088   45790 command_runner.go:130] >       "username": "nonroot",
	I0816 17:39:58.012100   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012108   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012113   45790 command_runner.go:130] >     },
	I0816 17:39:58.012121   45790 command_runner.go:130] >     {
	I0816 17:39:58.012130   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 17:39:58.012139   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012146   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 17:39:58.012154   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012160   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012169   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 17:39:58.012182   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 17:39:58.012188   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012198   45790 command_runner.go:130] >       "size": "149009664",
	I0816 17:39:58.012204   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012212   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012217   45790 command_runner.go:130] >       },
	I0816 17:39:58.012226   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012233   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012243   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012249   45790 command_runner.go:130] >     },
	I0816 17:39:58.012255   45790 command_runner.go:130] >     {
	I0816 17:39:58.012265   45790 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 17:39:58.012274   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012282   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 17:39:58.012291   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012298   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012311   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 17:39:58.012327   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 17:39:58.012336   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012341   45790 command_runner.go:130] >       "size": "95233506",
	I0816 17:39:58.012346   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012353   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012358   45790 command_runner.go:130] >       },
	I0816 17:39:58.012366   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012371   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012380   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012394   45790 command_runner.go:130] >     },
	I0816 17:39:58.012403   45790 command_runner.go:130] >     {
	I0816 17:39:58.012413   45790 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 17:39:58.012422   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012429   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 17:39:58.012437   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012443   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012473   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 17:39:58.012489   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 17:39:58.012495   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012502   45790 command_runner.go:130] >       "size": "89437512",
	I0816 17:39:58.012511   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012517   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012525   45790 command_runner.go:130] >       },
	I0816 17:39:58.012530   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012536   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012541   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012546   45790 command_runner.go:130] >     },
	I0816 17:39:58.012551   45790 command_runner.go:130] >     {
	I0816 17:39:58.012565   45790 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 17:39:58.012570   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012578   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 17:39:58.012584   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012589   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012610   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 17:39:58.012631   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 17:39:58.012638   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012643   45790 command_runner.go:130] >       "size": "92728217",
	I0816 17:39:58.012648   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.012653   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012659   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012664   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012669   45790 command_runner.go:130] >     },
	I0816 17:39:58.012673   45790 command_runner.go:130] >     {
	I0816 17:39:58.012681   45790 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 17:39:58.012686   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012704   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 17:39:58.012712   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012718   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012730   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 17:39:58.012741   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 17:39:58.012750   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012756   45790 command_runner.go:130] >       "size": "68420936",
	I0816 17:39:58.012761   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012765   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.012768   45790 command_runner.go:130] >       },
	I0816 17:39:58.012772   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012776   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012783   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.012786   45790 command_runner.go:130] >     },
	I0816 17:39:58.012789   45790 command_runner.go:130] >     {
	I0816 17:39:58.012795   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 17:39:58.012801   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.012806   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 17:39:58.012812   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012816   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.012822   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 17:39:58.012829   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 17:39:58.012834   45790 command_runner.go:130] >       ],
	I0816 17:39:58.012838   45790 command_runner.go:130] >       "size": "742080",
	I0816 17:39:58.012842   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.012846   45790 command_runner.go:130] >         "value": "65535"
	I0816 17:39:58.012850   45790 command_runner.go:130] >       },
	I0816 17:39:58.012854   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.012859   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.012863   45790 command_runner.go:130] >       "pinned": true
	I0816 17:39:58.012867   45790 command_runner.go:130] >     }
	I0816 17:39:58.012870   45790 command_runner.go:130] >   ]
	I0816 17:39:58.012873   45790 command_runner.go:130] > }
	I0816 17:39:58.013194   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:39:58.013211   45790 crio.go:433] Images already preloaded, skipping extraction
	I0816 17:39:58.013279   45790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:39:58.043584   45790 command_runner.go:130] > {
	I0816 17:39:58.043615   45790 command_runner.go:130] >   "images": [
	I0816 17:39:58.043619   45790 command_runner.go:130] >     {
	I0816 17:39:58.043626   45790 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0816 17:39:58.043630   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043636   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0816 17:39:58.043639   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043643   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043651   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0816 17:39:58.043658   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0816 17:39:58.043661   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043667   45790 command_runner.go:130] >       "size": "87165492",
	I0816 17:39:58.043673   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.043678   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.043688   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.043695   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.043700   45790 command_runner.go:130] >     },
	I0816 17:39:58.043706   45790 command_runner.go:130] >     {
	I0816 17:39:58.043714   45790 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0816 17:39:58.043724   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043731   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0816 17:39:58.043738   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043742   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043749   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0816 17:39:58.043756   45790 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0816 17:39:58.043760   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043766   45790 command_runner.go:130] >       "size": "87190579",
	I0816 17:39:58.043769   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.043782   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.043791   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.043801   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.043806   45790 command_runner.go:130] >     },
	I0816 17:39:58.043811   45790 command_runner.go:130] >     {
	I0816 17:39:58.043823   45790 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0816 17:39:58.043829   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043837   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0816 17:39:58.043841   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043849   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043859   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0816 17:39:58.043866   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0816 17:39:58.043872   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043876   45790 command_runner.go:130] >       "size": "1363676",
	I0816 17:39:58.043884   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.043893   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.043902   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.043912   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.043918   45790 command_runner.go:130] >     },
	I0816 17:39:58.043925   45790 command_runner.go:130] >     {
	I0816 17:39:58.043931   45790 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0816 17:39:58.043937   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.043942   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0816 17:39:58.043948   45790 command_runner.go:130] >       ],
	I0816 17:39:58.043951   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.043961   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0816 17:39:58.044031   45790 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0816 17:39:58.044039   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044043   45790 command_runner.go:130] >       "size": "31470524",
	I0816 17:39:58.044047   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.044050   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044055   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044061   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044069   45790 command_runner.go:130] >     },
	I0816 17:39:58.044075   45790 command_runner.go:130] >     {
	I0816 17:39:58.044088   45790 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0816 17:39:58.044097   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044104   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0816 17:39:58.044113   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044119   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044138   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0816 17:39:58.044151   45790 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0816 17:39:58.044161   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044167   45790 command_runner.go:130] >       "size": "61245718",
	I0816 17:39:58.044174   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.044187   45790 command_runner.go:130] >       "username": "nonroot",
	I0816 17:39:58.044196   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044203   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044210   45790 command_runner.go:130] >     },
	I0816 17:39:58.044216   45790 command_runner.go:130] >     {
	I0816 17:39:58.044227   45790 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0816 17:39:58.044235   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044240   45790 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0816 17:39:58.044248   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044256   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044270   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0816 17:39:58.044284   45790 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0816 17:39:58.044293   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044299   45790 command_runner.go:130] >       "size": "149009664",
	I0816 17:39:58.044306   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.044312   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.044324   45790 command_runner.go:130] >       },
	I0816 17:39:58.044333   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044338   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044342   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044346   45790 command_runner.go:130] >     },
	I0816 17:39:58.044353   45790 command_runner.go:130] >     {
	I0816 17:39:58.044363   45790 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0816 17:39:58.044372   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044381   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0816 17:39:58.044389   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044396   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044410   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0816 17:39:58.044423   45790 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0816 17:39:58.044431   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044438   45790 command_runner.go:130] >       "size": "95233506",
	I0816 17:39:58.044445   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.044449   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.044454   45790 command_runner.go:130] >       },
	I0816 17:39:58.044461   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044469   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044483   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044492   45790 command_runner.go:130] >     },
	I0816 17:39:58.044497   45790 command_runner.go:130] >     {
	I0816 17:39:58.044509   45790 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0816 17:39:58.044518   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044527   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0816 17:39:58.044539   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044543   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044571   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0816 17:39:58.044587   45790 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0816 17:39:58.044598   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044606   45790 command_runner.go:130] >       "size": "89437512",
	I0816 17:39:58.044613   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.044618   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.044638   45790 command_runner.go:130] >       },
	I0816 17:39:58.044644   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044653   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044659   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044667   45790 command_runner.go:130] >     },
	I0816 17:39:58.044672   45790 command_runner.go:130] >     {
	I0816 17:39:58.044684   45790 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0816 17:39:58.044694   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044702   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0816 17:39:58.044711   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044718   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044732   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0816 17:39:58.044749   45790 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0816 17:39:58.044757   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044763   45790 command_runner.go:130] >       "size": "92728217",
	I0816 17:39:58.044769   45790 command_runner.go:130] >       "uid": null,
	I0816 17:39:58.044773   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.044782   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.044791   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.044796   45790 command_runner.go:130] >     },
	I0816 17:39:58.044805   45790 command_runner.go:130] >     {
	I0816 17:39:58.044814   45790 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0816 17:39:58.044829   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.044941   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0816 17:39:58.044958   45790 command_runner.go:130] >       ],
	I0816 17:39:58.044966   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.044981   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0816 17:39:58.044994   45790 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0816 17:39:58.045000   45790 command_runner.go:130] >       ],
	I0816 17:39:58.045007   45790 command_runner.go:130] >       "size": "68420936",
	I0816 17:39:58.045016   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.045023   45790 command_runner.go:130] >         "value": "0"
	I0816 17:39:58.045032   45790 command_runner.go:130] >       },
	I0816 17:39:58.045039   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.045048   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.045055   45790 command_runner.go:130] >       "pinned": false
	I0816 17:39:58.045061   45790 command_runner.go:130] >     },
	I0816 17:39:58.045067   45790 command_runner.go:130] >     {
	I0816 17:39:58.045077   45790 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0816 17:39:58.045151   45790 command_runner.go:130] >       "repoTags": [
	I0816 17:39:58.045168   45790 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0816 17:39:58.045178   45790 command_runner.go:130] >       ],
	I0816 17:39:58.045185   45790 command_runner.go:130] >       "repoDigests": [
	I0816 17:39:58.045199   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0816 17:39:58.045210   45790 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0816 17:39:58.045217   45790 command_runner.go:130] >       ],
	I0816 17:39:58.045223   45790 command_runner.go:130] >       "size": "742080",
	I0816 17:39:58.045232   45790 command_runner.go:130] >       "uid": {
	I0816 17:39:58.045240   45790 command_runner.go:130] >         "value": "65535"
	I0816 17:39:58.045249   45790 command_runner.go:130] >       },
	I0816 17:39:58.045257   45790 command_runner.go:130] >       "username": "",
	I0816 17:39:58.045265   45790 command_runner.go:130] >       "spec": null,
	I0816 17:39:58.045272   45790 command_runner.go:130] >       "pinned": true
	I0816 17:39:58.045280   45790 command_runner.go:130] >     }
	I0816 17:39:58.045285   45790 command_runner.go:130] >   ]
	I0816 17:39:58.045293   45790 command_runner.go:130] > }
	I0816 17:39:58.045521   45790 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 17:39:58.045545   45790 cache_images.go:84] Images are preloaded, skipping loading
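	For reference, a minimal, hypothetical Go sketch of the kind of check logged above: decode an image listing in the shape printed by the log (fields id, repoTags, repoDigests, size, pinned) and verify that a set of required tags is present. The top-level "images" key, reading from stdin, and the chosen required tags are assumptions made to keep the sketch self-contained; this is not minikube's actual code.

	```go
	// Hypothetical sketch: verify required images appear in a CRI-O image
	// listing shaped like the JSON in the log above. Assumes the listing's
	// top-level key is "images" and arrives on stdin.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageInfo struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []imageInfo `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}

		// Example tags taken from the listing above; any set could be substituted.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}

		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range required {
			if !have[want] {
				fmt.Println("missing image:", want)
				os.Exit(1)
			}
		}
		fmt.Println("all required images are present")
	}
	```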
	I0816 17:39:58.045553   45790 kubeadm.go:934] updating node { 192.168.39.218 8443 v1.31.0 crio true true} ...
	I0816 17:39:58.045666   45790 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-797386 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
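	As a hedged illustration of the kubelet unit templating logged above, the sketch below renders the same drop-in text from the node values shown in the log (version v1.31.0, node multinode-797386, IP 192.168.39.218). The template string mirrors the logged unit; the struct and field names are assumptions for the example only, not minikube's actual types.

	```go
	// Hypothetical sketch: render a kubelet systemd drop-in like the one in
	// the log from per-node values, using only text/template.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	type nodeConfig struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values taken from the log above.
		cfg := nodeConfig{
			KubernetesVersion: "v1.31.0",
			NodeName:          "multinode-797386",
			NodeIP:            "192.168.39.218",
		}
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}
	```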
	I0816 17:39:58.046037   45790 ssh_runner.go:195] Run: crio config
	I0816 17:39:58.087059   45790 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0816 17:39:58.087098   45790 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0816 17:39:58.087109   45790 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0816 17:39:58.087115   45790 command_runner.go:130] > #
	I0816 17:39:58.087126   45790 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0816 17:39:58.087136   45790 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0816 17:39:58.087148   45790 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0816 17:39:58.087158   45790 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0816 17:39:58.087164   45790 command_runner.go:130] > # reload'.
	I0816 17:39:58.087174   45790 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0816 17:39:58.087185   45790 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0816 17:39:58.087195   45790 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0816 17:39:58.087209   45790 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0816 17:39:58.087214   45790 command_runner.go:130] > [crio]
	I0816 17:39:58.087223   45790 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0816 17:39:58.087233   45790 command_runner.go:130] > # containers images, in this directory.
	I0816 17:39:58.087243   45790 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0816 17:39:58.087259   45790 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0816 17:39:58.087270   45790 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0816 17:39:58.087282   45790 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0816 17:39:58.087453   45790 command_runner.go:130] > # imagestore = ""
	I0816 17:39:58.087480   45790 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0816 17:39:58.087493   45790 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0816 17:39:58.087604   45790 command_runner.go:130] > storage_driver = "overlay"
	I0816 17:39:58.087622   45790 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0816 17:39:58.087631   45790 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0816 17:39:58.087640   45790 command_runner.go:130] > storage_option = [
	I0816 17:39:58.087716   45790 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0816 17:39:58.087747   45790 command_runner.go:130] > ]
	I0816 17:39:58.087758   45790 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0816 17:39:58.087779   45790 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0816 17:39:58.087948   45790 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0816 17:39:58.087960   45790 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0816 17:39:58.087969   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0816 17:39:58.087977   45790 command_runner.go:130] > # always happen on a node reboot
	I0816 17:39:58.088211   45790 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0816 17:39:58.088236   45790 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0816 17:39:58.088257   45790 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0816 17:39:58.088266   45790 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0816 17:39:58.088328   45790 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0816 17:39:58.088347   45790 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0816 17:39:58.088362   45790 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0816 17:39:58.088577   45790 command_runner.go:130] > # internal_wipe = true
	I0816 17:39:58.088607   45790 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0816 17:39:58.088629   45790 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0816 17:39:58.088881   45790 command_runner.go:130] > # internal_repair = false
	I0816 17:39:58.088893   45790 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0816 17:39:58.088903   45790 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0816 17:39:58.088912   45790 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0816 17:39:58.089143   45790 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0816 17:39:58.089155   45790 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0816 17:39:58.089161   45790 command_runner.go:130] > [crio.api]
	I0816 17:39:58.089170   45790 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0816 17:39:58.089362   45790 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0816 17:39:58.089379   45790 command_runner.go:130] > # IP address on which the stream server will listen.
	I0816 17:39:58.089584   45790 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0816 17:39:58.089598   45790 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0816 17:39:58.089605   45790 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0816 17:39:58.089898   45790 command_runner.go:130] > # stream_port = "0"
	I0816 17:39:58.089909   45790 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0816 17:39:58.090130   45790 command_runner.go:130] > # stream_enable_tls = false
	I0816 17:39:58.090141   45790 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0816 17:39:58.090312   45790 command_runner.go:130] > # stream_idle_timeout = ""
	I0816 17:39:58.090326   45790 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0816 17:39:58.090337   45790 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0816 17:39:58.090345   45790 command_runner.go:130] > # minutes.
	I0816 17:39:58.090522   45790 command_runner.go:130] > # stream_tls_cert = ""
	I0816 17:39:58.090544   45790 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0816 17:39:58.090555   45790 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0816 17:39:58.090760   45790 command_runner.go:130] > # stream_tls_key = ""
	I0816 17:39:58.090779   45790 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0816 17:39:58.090790   45790 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0816 17:39:58.090835   45790 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0816 17:39:58.090931   45790 command_runner.go:130] > # stream_tls_ca = ""
	I0816 17:39:58.090945   45790 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 17:39:58.091055   45790 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0816 17:39:58.091071   45790 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0816 17:39:58.091167   45790 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0816 17:39:58.091182   45790 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0816 17:39:58.091194   45790 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0816 17:39:58.091205   45790 command_runner.go:130] > [crio.runtime]
	I0816 17:39:58.091217   45790 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0816 17:39:58.091229   45790 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0816 17:39:58.091238   45790 command_runner.go:130] > # "nofile=1024:2048"
	I0816 17:39:58.091252   45790 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0816 17:39:58.091274   45790 command_runner.go:130] > # default_ulimits = [
	I0816 17:39:58.091394   45790 command_runner.go:130] > # ]
	I0816 17:39:58.091409   45790 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0816 17:39:58.091611   45790 command_runner.go:130] > # no_pivot = false
	I0816 17:39:58.091626   45790 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0816 17:39:58.091637   45790 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0816 17:39:58.091913   45790 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0816 17:39:58.091927   45790 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0816 17:39:58.091935   45790 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0816 17:39:58.091947   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 17:39:58.092024   45790 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0816 17:39:58.092036   45790 command_runner.go:130] > # Cgroup setting for conmon
	I0816 17:39:58.092047   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0816 17:39:58.092119   45790 command_runner.go:130] > conmon_cgroup = "pod"
	I0816 17:39:58.092129   45790 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0816 17:39:58.092134   45790 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0816 17:39:58.092141   45790 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0816 17:39:58.092144   45790 command_runner.go:130] > conmon_env = [
	I0816 17:39:58.092229   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 17:39:58.092299   45790 command_runner.go:130] > ]
	I0816 17:39:58.092317   45790 command_runner.go:130] > # Additional environment variables to set for all the
	I0816 17:39:58.092328   45790 command_runner.go:130] > # containers. These are overridden if set in the
	I0816 17:39:58.092338   45790 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0816 17:39:58.092348   45790 command_runner.go:130] > # default_env = [
	I0816 17:39:58.092448   45790 command_runner.go:130] > # ]
	I0816 17:39:58.092463   45790 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0816 17:39:58.092475   45790 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0816 17:39:58.092682   45790 command_runner.go:130] > # selinux = false
	I0816 17:39:58.092697   45790 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0816 17:39:58.092708   45790 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0816 17:39:58.092720   45790 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0816 17:39:58.092894   45790 command_runner.go:130] > # seccomp_profile = ""
	I0816 17:39:58.092909   45790 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0816 17:39:58.092920   45790 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0816 17:39:58.092930   45790 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0816 17:39:58.092940   45790 command_runner.go:130] > # which might increase security.
	I0816 17:39:58.092948   45790 command_runner.go:130] > # This option is currently deprecated,
	I0816 17:39:58.092960   45790 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0816 17:39:58.093031   45790 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0816 17:39:58.093048   45790 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0816 17:39:58.093059   45790 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0816 17:39:58.093073   45790 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0816 17:39:58.093086   45790 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0816 17:39:58.093098   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.093256   45790 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0816 17:39:58.093273   45790 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0816 17:39:58.093278   45790 command_runner.go:130] > # the cgroup blockio controller.
	I0816 17:39:58.093420   45790 command_runner.go:130] > # blockio_config_file = ""
	I0816 17:39:58.093436   45790 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0816 17:39:58.093443   45790 command_runner.go:130] > # blockio parameters.
	I0816 17:39:58.093672   45790 command_runner.go:130] > # blockio_reload = false
	I0816 17:39:58.093689   45790 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0816 17:39:58.093695   45790 command_runner.go:130] > # irqbalance daemon.
	I0816 17:39:58.093979   45790 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0816 17:39:58.093997   45790 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0816 17:39:58.094007   45790 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0816 17:39:58.094019   45790 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0816 17:39:58.094203   45790 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0816 17:39:58.094218   45790 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0816 17:39:58.094227   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.094378   45790 command_runner.go:130] > # rdt_config_file = ""
	I0816 17:39:58.094390   45790 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0816 17:39:58.094497   45790 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0816 17:39:58.094542   45790 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0816 17:39:58.094735   45790 command_runner.go:130] > # separate_pull_cgroup = ""
	I0816 17:39:58.094748   45790 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0816 17:39:58.094759   45790 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0816 17:39:58.094768   45790 command_runner.go:130] > # will be added.
	I0816 17:39:58.094832   45790 command_runner.go:130] > # default_capabilities = [
	I0816 17:39:58.094869   45790 command_runner.go:130] > # 	"CHOWN",
	I0816 17:39:58.094887   45790 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0816 17:39:58.094905   45790 command_runner.go:130] > # 	"FSETID",
	I0816 17:39:58.094957   45790 command_runner.go:130] > # 	"FOWNER",
	I0816 17:39:58.094967   45790 command_runner.go:130] > # 	"SETGID",
	I0816 17:39:58.094990   45790 command_runner.go:130] > # 	"SETUID",
	I0816 17:39:58.095000   45790 command_runner.go:130] > # 	"SETPCAP",
	I0816 17:39:58.095007   45790 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0816 17:39:58.095034   45790 command_runner.go:130] > # 	"KILL",
	I0816 17:39:58.095043   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095057   45790 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0816 17:39:58.095070   45790 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0816 17:39:58.095080   45790 command_runner.go:130] > # add_inheritable_capabilities = false
	I0816 17:39:58.095093   45790 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0816 17:39:58.095106   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 17:39:58.095116   45790 command_runner.go:130] > default_sysctls = [
	I0816 17:39:58.095126   45790 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0816 17:39:58.095134   45790 command_runner.go:130] > ]
	I0816 17:39:58.095144   45790 command_runner.go:130] > # List of devices on the host that a
	I0816 17:39:58.095166   45790 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0816 17:39:58.095176   45790 command_runner.go:130] > # allowed_devices = [
	I0816 17:39:58.095184   45790 command_runner.go:130] > # 	"/dev/fuse",
	I0816 17:39:58.095192   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095201   45790 command_runner.go:130] > # List of additional devices. specified as
	I0816 17:39:58.095218   45790 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0816 17:39:58.095229   45790 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0816 17:39:58.095240   45790 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0816 17:39:58.095249   45790 command_runner.go:130] > # additional_devices = [
	I0816 17:39:58.095253   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095263   45790 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0816 17:39:58.095276   45790 command_runner.go:130] > # cdi_spec_dirs = [
	I0816 17:39:58.095283   45790 command_runner.go:130] > # 	"/etc/cdi",
	I0816 17:39:58.095291   45790 command_runner.go:130] > # 	"/var/run/cdi",
	I0816 17:39:58.095318   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095327   45790 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0816 17:39:58.095338   45790 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0816 17:39:58.095348   45790 command_runner.go:130] > # Defaults to false.
	I0816 17:39:58.095361   45790 command_runner.go:130] > # device_ownership_from_security_context = false
	I0816 17:39:58.095375   45790 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0816 17:39:58.095387   45790 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0816 17:39:58.095395   45790 command_runner.go:130] > # hooks_dir = [
	I0816 17:39:58.095405   45790 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0816 17:39:58.095414   45790 command_runner.go:130] > # ]
	I0816 17:39:58.095425   45790 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0816 17:39:58.095438   45790 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0816 17:39:58.095450   45790 command_runner.go:130] > # its default mounts from the following two files:
	I0816 17:39:58.095457   45790 command_runner.go:130] > #
	I0816 17:39:58.095468   45790 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0816 17:39:58.095482   45790 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0816 17:39:58.095491   45790 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0816 17:39:58.095497   45790 command_runner.go:130] > #
	I0816 17:39:58.095509   45790 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0816 17:39:58.095519   45790 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0816 17:39:58.095531   45790 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0816 17:39:58.095540   45790 command_runner.go:130] > #      only add mounts it finds in this file.
	I0816 17:39:58.095550   45790 command_runner.go:130] > #
	I0816 17:39:58.095559   45790 command_runner.go:130] > # default_mounts_file = ""
	I0816 17:39:58.095569   45790 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0816 17:39:58.095583   45790 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0816 17:39:58.095592   45790 command_runner.go:130] > pids_limit = 1024
	I0816 17:39:58.095602   45790 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0816 17:39:58.095614   45790 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0816 17:39:58.095625   45790 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0816 17:39:58.095640   45790 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0816 17:39:58.095649   45790 command_runner.go:130] > # log_size_max = -1
	I0816 17:39:58.095660   45790 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0816 17:39:58.095673   45790 command_runner.go:130] > # log_to_journald = false
	I0816 17:39:58.095686   45790 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0816 17:39:58.095697   45790 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0816 17:39:58.095709   45790 command_runner.go:130] > # Path to directory for container attach sockets.
	I0816 17:39:58.095719   45790 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0816 17:39:58.095727   45790 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0816 17:39:58.095736   45790 command_runner.go:130] > # bind_mount_prefix = ""
	I0816 17:39:58.095745   45790 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0816 17:39:58.095754   45790 command_runner.go:130] > # read_only = false
	I0816 17:39:58.095764   45790 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0816 17:39:58.095776   45790 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0816 17:39:58.095786   45790 command_runner.go:130] > # live configuration reload.
	I0816 17:39:58.095793   45790 command_runner.go:130] > # log_level = "info"
	I0816 17:39:58.095804   45790 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0816 17:39:58.095813   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.095823   45790 command_runner.go:130] > # log_filter = ""
	I0816 17:39:58.095832   45790 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0816 17:39:58.095858   45790 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0816 17:39:58.095869   45790 command_runner.go:130] > # separated by comma.
	I0816 17:39:58.095883   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.095894   45790 command_runner.go:130] > # uid_mappings = ""
	I0816 17:39:58.095904   45790 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0816 17:39:58.095915   45790 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0816 17:39:58.095924   45790 command_runner.go:130] > # separated by comma.
	I0816 17:39:58.095936   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.095944   45790 command_runner.go:130] > # gid_mappings = ""
	I0816 17:39:58.095953   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0816 17:39:58.095966   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 17:39:58.095979   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 17:39:58.095994   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.096005   45790 command_runner.go:130] > # minimum_mappable_uid = -1
	I0816 17:39:58.096017   45790 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0816 17:39:58.096030   45790 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0816 17:39:58.096042   45790 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0816 17:39:58.096057   45790 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0816 17:39:58.096067   45790 command_runner.go:130] > # minimum_mappable_gid = -1
	I0816 17:39:58.096077   45790 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0816 17:39:58.096086   45790 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0816 17:39:58.096094   45790 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0816 17:39:58.096107   45790 command_runner.go:130] > # ctr_stop_timeout = 30
	I0816 17:39:58.096120   45790 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0816 17:39:58.096133   45790 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0816 17:39:58.096143   45790 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0816 17:39:58.096154   45790 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0816 17:39:58.096160   45790 command_runner.go:130] > drop_infra_ctr = false
	I0816 17:39:58.096173   45790 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0816 17:39:58.096185   45790 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0816 17:39:58.096195   45790 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0816 17:39:58.096206   45790 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0816 17:39:58.096217   45790 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0816 17:39:58.096229   45790 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0816 17:39:58.096240   45790 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0816 17:39:58.096247   45790 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0816 17:39:58.096254   45790 command_runner.go:130] > # shared_cpuset = ""
	I0816 17:39:58.096263   45790 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0816 17:39:58.096274   45790 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0816 17:39:58.096281   45790 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0816 17:39:58.096300   45790 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0816 17:39:58.096312   45790 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0816 17:39:58.096322   45790 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0816 17:39:58.096335   45790 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0816 17:39:58.096343   45790 command_runner.go:130] > # enable_criu_support = false
	I0816 17:39:58.096351   45790 command_runner.go:130] > # Enable/disable the generation of the container,
	I0816 17:39:58.096364   45790 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0816 17:39:58.096373   45790 command_runner.go:130] > # enable_pod_events = false
	I0816 17:39:58.096383   45790 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 17:39:58.096397   45790 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0816 17:39:58.096408   45790 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0816 17:39:58.096417   45790 command_runner.go:130] > # default_runtime = "runc"
	I0816 17:39:58.096426   45790 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0816 17:39:58.096440   45790 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0816 17:39:58.096454   45790 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0816 17:39:58.096465   45790 command_runner.go:130] > # creation as a file is not desired either.
	I0816 17:39:58.096480   45790 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0816 17:39:58.096494   45790 command_runner.go:130] > # the hostname is being managed dynamically.
	I0816 17:39:58.096504   45790 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0816 17:39:58.096511   45790 command_runner.go:130] > # ]
	I0816 17:39:58.096523   45790 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0816 17:39:58.096536   45790 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0816 17:39:58.096549   45790 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0816 17:39:58.096560   45790 command_runner.go:130] > # Each entry in the table should follow the format:
	I0816 17:39:58.096567   45790 command_runner.go:130] > #
	I0816 17:39:58.096591   45790 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0816 17:39:58.096604   45790 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0816 17:39:58.096649   45790 command_runner.go:130] > # runtime_type = "oci"
	I0816 17:39:58.096662   45790 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0816 17:39:58.096670   45790 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0816 17:39:58.096680   45790 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0816 17:39:58.096687   45790 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0816 17:39:58.096696   45790 command_runner.go:130] > # monitor_env = []
	I0816 17:39:58.096703   45790 command_runner.go:130] > # privileged_without_host_devices = false
	I0816 17:39:58.096710   45790 command_runner.go:130] > # allowed_annotations = []
	I0816 17:39:58.096715   45790 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0816 17:39:58.096722   45790 command_runner.go:130] > # Where:
	I0816 17:39:58.096730   45790 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0816 17:39:58.096742   45790 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0816 17:39:58.096755   45790 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0816 17:39:58.096768   45790 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0816 17:39:58.096777   45790 command_runner.go:130] > #   in $PATH.
	I0816 17:39:58.096786   45790 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0816 17:39:58.096797   45790 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0816 17:39:58.096810   45790 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0816 17:39:58.096820   45790 command_runner.go:130] > #   state.
	I0816 17:39:58.096830   45790 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0816 17:39:58.096841   45790 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0816 17:39:58.096854   45790 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0816 17:39:58.096865   45790 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0816 17:39:58.096876   45790 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0816 17:39:58.096889   45790 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0816 17:39:58.096901   45790 command_runner.go:130] > #   The currently recognized values are:
	I0816 17:39:58.096911   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0816 17:39:58.096924   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0816 17:39:58.096937   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0816 17:39:58.096949   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0816 17:39:58.096964   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0816 17:39:58.096977   45790 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0816 17:39:58.096990   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0816 17:39:58.097000   45790 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0816 17:39:58.097012   45790 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0816 17:39:58.097025   45790 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0816 17:39:58.097034   45790 command_runner.go:130] > #   deprecated option "conmon".
	I0816 17:39:58.097045   45790 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0816 17:39:58.097056   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0816 17:39:58.097068   45790 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0816 17:39:58.097079   45790 command_runner.go:130] > #   should be moved to the container's cgroup
	I0816 17:39:58.097094   45790 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0816 17:39:58.097114   45790 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0816 17:39:58.097130   45790 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0816 17:39:58.097141   45790 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0816 17:39:58.097147   45790 command_runner.go:130] > #
	I0816 17:39:58.097154   45790 command_runner.go:130] > # Using the seccomp notifier feature:
	I0816 17:39:58.097163   45790 command_runner.go:130] > #
	I0816 17:39:58.097172   45790 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0816 17:39:58.097185   45790 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0816 17:39:58.097193   45790 command_runner.go:130] > #
	I0816 17:39:58.097203   45790 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0816 17:39:58.097214   45790 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0816 17:39:58.097218   45790 command_runner.go:130] > #
	I0816 17:39:58.097227   45790 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0816 17:39:58.097235   45790 command_runner.go:130] > # feature.
	I0816 17:39:58.097241   45790 command_runner.go:130] > #
	I0816 17:39:58.097253   45790 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0816 17:39:58.097266   45790 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0816 17:39:58.097278   45790 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0816 17:39:58.097290   45790 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0816 17:39:58.097304   45790 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0816 17:39:58.097310   45790 command_runner.go:130] > #
	I0816 17:39:58.097319   45790 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0816 17:39:58.097336   45790 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0816 17:39:58.097344   45790 command_runner.go:130] > #
	I0816 17:39:58.097355   45790 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0816 17:39:58.097366   45790 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0816 17:39:58.097374   45790 command_runner.go:130] > #
	I0816 17:39:58.097383   45790 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0816 17:39:58.097392   45790 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0816 17:39:58.097396   45790 command_runner.go:130] > # limitation.
	I0816 17:39:58.097401   45790 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0816 17:39:58.097405   45790 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0816 17:39:58.097411   45790 command_runner.go:130] > runtime_type = "oci"
	I0816 17:39:58.097418   45790 command_runner.go:130] > runtime_root = "/run/runc"
	I0816 17:39:58.097424   45790 command_runner.go:130] > runtime_config_path = ""
	I0816 17:39:58.097431   45790 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0816 17:39:58.097437   45790 command_runner.go:130] > monitor_cgroup = "pod"
	I0816 17:39:58.097444   45790 command_runner.go:130] > monitor_exec_cgroup = ""
	I0816 17:39:58.097450   45790 command_runner.go:130] > monitor_env = [
	I0816 17:39:58.097459   45790 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0816 17:39:58.097464   45790 command_runner.go:130] > ]
	I0816 17:39:58.097471   45790 command_runner.go:130] > privileged_without_host_devices = false
	I0816 17:39:58.097480   45790 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0816 17:39:58.097488   45790 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0816 17:39:58.097501   45790 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0816 17:39:58.097516   45790 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0816 17:39:58.097531   45790 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0816 17:39:58.097542   45790 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0816 17:39:58.097557   45790 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0816 17:39:58.097571   45790 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0816 17:39:58.097583   45790 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0816 17:39:58.097594   45790 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0816 17:39:58.097603   45790 command_runner.go:130] > # Example:
	I0816 17:39:58.097611   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0816 17:39:58.097619   45790 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0816 17:39:58.097627   45790 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0816 17:39:58.097634   45790 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0816 17:39:58.097639   45790 command_runner.go:130] > # cpuset = 0
	I0816 17:39:58.097645   45790 command_runner.go:130] > # cpushares = "0-1"
	I0816 17:39:58.097650   45790 command_runner.go:130] > # Where:
	I0816 17:39:58.097660   45790 command_runner.go:130] > # The workload name is workload-type.
	I0816 17:39:58.097671   45790 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0816 17:39:58.097680   45790 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0816 17:39:58.097689   45790 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0816 17:39:58.097700   45790 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0816 17:39:58.097709   45790 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0816 17:39:58.097717   45790 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0816 17:39:58.097727   45790 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0816 17:39:58.097733   45790 command_runner.go:130] > # Default value is set to true
	I0816 17:39:58.097740   45790 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0816 17:39:58.097748   45790 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0816 17:39:58.097756   45790 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0816 17:39:58.097764   45790 command_runner.go:130] > # Default value is set to 'false'
	I0816 17:39:58.097770   45790 command_runner.go:130] > # disable_hostport_mapping = false
	I0816 17:39:58.097780   45790 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0816 17:39:58.097785   45790 command_runner.go:130] > #
	I0816 17:39:58.097794   45790 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0816 17:39:58.097814   45790 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0816 17:39:58.097825   45790 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0816 17:39:58.097831   45790 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0816 17:39:58.097836   45790 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0816 17:39:58.097839   45790 command_runner.go:130] > [crio.image]
	I0816 17:39:58.097844   45790 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0816 17:39:58.097849   45790 command_runner.go:130] > # default_transport = "docker://"
	I0816 17:39:58.097856   45790 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0816 17:39:58.097862   45790 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0816 17:39:58.097866   45790 command_runner.go:130] > # global_auth_file = ""
	I0816 17:39:58.097871   45790 command_runner.go:130] > # The image used to instantiate infra containers.
	I0816 17:39:58.097875   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.097883   45790 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0816 17:39:58.097889   45790 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0816 17:39:58.097895   45790 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0816 17:39:58.097900   45790 command_runner.go:130] > # This option supports live configuration reload.
	I0816 17:39:58.097907   45790 command_runner.go:130] > # pause_image_auth_file = ""
	I0816 17:39:58.097913   45790 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0816 17:39:58.097921   45790 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0816 17:39:58.097930   45790 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0816 17:39:58.097938   45790 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0816 17:39:58.097942   45790 command_runner.go:130] > # pause_command = "/pause"
	I0816 17:39:58.097949   45790 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0816 17:39:58.097955   45790 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0816 17:39:58.097963   45790 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0816 17:39:58.097968   45790 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0816 17:39:58.097976   45790 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0816 17:39:58.097981   45790 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0816 17:39:58.097985   45790 command_runner.go:130] > # pinned_images = [
	I0816 17:39:58.097989   45790 command_runner.go:130] > # ]
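For illustration of the pattern types described above (exact, trailing-wildcard glob, and keyword), a hypothetical pinned_images entry; only the pause image name is taken from this run:

	pinned_images = [
		"registry.k8s.io/pause:3.10",      # exact match
		"registry.k8s.io/kube-apiserver*", # glob: wildcard at the end
		"*coredns*",                       # keyword: wildcards on both ends
	]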
	I0816 17:39:58.097994   45790 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0816 17:39:58.098002   45790 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0816 17:39:58.098008   45790 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0816 17:39:58.098016   45790 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0816 17:39:58.098021   45790 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0816 17:39:58.098027   45790 command_runner.go:130] > # signature_policy = ""
	I0816 17:39:58.098032   45790 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0816 17:39:58.098040   45790 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0816 17:39:58.098047   45790 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0816 17:39:58.098055   45790 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0816 17:39:58.098061   45790 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0816 17:39:58.098066   45790 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0816 17:39:58.098071   45790 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0816 17:39:58.098081   45790 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0816 17:39:58.098084   45790 command_runner.go:130] > # changing them here.
	I0816 17:39:58.098088   45790 command_runner.go:130] > # insecure_registries = [
	I0816 17:39:58.098092   45790 command_runner.go:130] > # ]
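If registries were overridden in this file rather than in /etc/containers/registries.conf, the (hypothetical) form would be:

	insecure_registries = [
		"registry.local:5000",
	]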
	I0816 17:39:58.098098   45790 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0816 17:39:58.098105   45790 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0816 17:39:58.098110   45790 command_runner.go:130] > # image_volumes = "mkdir"
	I0816 17:39:58.098117   45790 command_runner.go:130] > # Temporary directory to use for storing big files
	I0816 17:39:58.098121   45790 command_runner.go:130] > # big_files_temporary_dir = ""
	I0816 17:39:58.098127   45790 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0816 17:39:58.098131   45790 command_runner.go:130] > # CNI plugins.
	I0816 17:39:58.098135   45790 command_runner.go:130] > [crio.network]
	I0816 17:39:58.098140   45790 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0816 17:39:58.098148   45790 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0816 17:39:58.098153   45790 command_runner.go:130] > # cni_default_network = ""
	I0816 17:39:58.098158   45790 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0816 17:39:58.098164   45790 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0816 17:39:58.098170   45790 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0816 17:39:58.098176   45790 command_runner.go:130] > # plugin_dirs = [
	I0816 17:39:58.098180   45790 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0816 17:39:58.098183   45790 command_runner.go:130] > # ]
	I0816 17:39:58.098188   45790 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0816 17:39:58.098194   45790 command_runner.go:130] > [crio.metrics]
	I0816 17:39:58.098198   45790 command_runner.go:130] > # Globally enable or disable metrics support.
	I0816 17:39:58.098203   45790 command_runner.go:130] > enable_metrics = true
	I0816 17:39:58.098209   45790 command_runner.go:130] > # Specify enabled metrics collectors.
	I0816 17:39:58.098214   45790 command_runner.go:130] > # Per default all metrics are enabled.
	I0816 17:39:58.098220   45790 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0816 17:39:58.098226   45790 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0816 17:39:58.098234   45790 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0816 17:39:58.098239   45790 command_runner.go:130] > # metrics_collectors = [
	I0816 17:39:58.098243   45790 command_runner.go:130] > # 	"operations",
	I0816 17:39:58.098248   45790 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0816 17:39:58.098253   45790 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0816 17:39:58.098256   45790 command_runner.go:130] > # 	"operations_errors",
	I0816 17:39:58.098264   45790 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0816 17:39:58.098270   45790 command_runner.go:130] > # 	"image_pulls_by_name",
	I0816 17:39:58.098278   45790 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0816 17:39:58.098282   45790 command_runner.go:130] > # 	"image_pulls_failures",
	I0816 17:39:58.098286   45790 command_runner.go:130] > # 	"image_pulls_successes",
	I0816 17:39:58.098296   45790 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0816 17:39:58.098303   45790 command_runner.go:130] > # 	"image_layer_reuse",
	I0816 17:39:58.098307   45790 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0816 17:39:58.098311   45790 command_runner.go:130] > # 	"containers_oom_total",
	I0816 17:39:58.098315   45790 command_runner.go:130] > # 	"containers_oom",
	I0816 17:39:58.098318   45790 command_runner.go:130] > # 	"processes_defunct",
	I0816 17:39:58.098322   45790 command_runner.go:130] > # 	"operations_total",
	I0816 17:39:58.098326   45790 command_runner.go:130] > # 	"operations_latency_seconds",
	I0816 17:39:58.098331   45790 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0816 17:39:58.098335   45790 command_runner.go:130] > # 	"operations_errors_total",
	I0816 17:39:58.098342   45790 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0816 17:39:58.098346   45790 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0816 17:39:58.098351   45790 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0816 17:39:58.098355   45790 command_runner.go:130] > # 	"image_pulls_success_total",
	I0816 17:39:58.098361   45790 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0816 17:39:58.098365   45790 command_runner.go:130] > # 	"containers_oom_count_total",
	I0816 17:39:58.098370   45790 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0816 17:39:58.098375   45790 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0816 17:39:58.098382   45790 command_runner.go:130] > # ]
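A minimal sketch of selecting only a few collectors, using the prefix equivalence noted above; the particular selection is illustrative:

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",                 # same collector as "crio_operations" / "container_runtime_crio_operations"
		"image_pulls_failure_total",
	]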
	I0816 17:39:58.098387   45790 command_runner.go:130] > # The port on which the metrics server will listen.
	I0816 17:39:58.098391   45790 command_runner.go:130] > # metrics_port = 9090
	I0816 17:39:58.098395   45790 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0816 17:39:58.098399   45790 command_runner.go:130] > # metrics_socket = ""
	I0816 17:39:58.098404   45790 command_runner.go:130] > # The certificate for the secure metrics server.
	I0816 17:39:58.098413   45790 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0816 17:39:58.098422   45790 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0816 17:39:58.098432   45790 command_runner.go:130] > # certificate on any modification event.
	I0816 17:39:58.098438   45790 command_runner.go:130] > # metrics_cert = ""
	I0816 17:39:58.098449   45790 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0816 17:39:58.098457   45790 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0816 17:39:58.098461   45790 command_runner.go:130] > # metrics_key = ""
	I0816 17:39:58.098469   45790 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0816 17:39:58.098474   45790 command_runner.go:130] > [crio.tracing]
	I0816 17:39:58.098482   45790 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0816 17:39:58.098486   45790 command_runner.go:130] > # enable_tracing = false
	I0816 17:39:58.098495   45790 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0816 17:39:58.098505   45790 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0816 17:39:58.098518   45790 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0816 17:39:58.098526   45790 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
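For reference, a hypothetical tracing block that samples every span; the collector address is a placeholder:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000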
	I0816 17:39:58.098530   45790 command_runner.go:130] > # CRI-O NRI configuration.
	I0816 17:39:58.098538   45790 command_runner.go:130] > [crio.nri]
	I0816 17:39:58.098545   45790 command_runner.go:130] > # Globally enable or disable NRI.
	I0816 17:39:58.098554   45790 command_runner.go:130] > # enable_nri = false
	I0816 17:39:58.098560   45790 command_runner.go:130] > # NRI socket to listen on.
	I0816 17:39:58.098570   45790 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0816 17:39:58.098579   45790 command_runner.go:130] > # NRI plugin directory to use.
	I0816 17:39:58.098590   45790 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0816 17:39:58.098598   45790 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0816 17:39:58.098609   45790 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0816 17:39:58.098620   45790 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0816 17:39:58.098629   45790 command_runner.go:130] > # nri_disable_connections = false
	I0816 17:39:58.098634   45790 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0816 17:39:58.098641   45790 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0816 17:39:58.098645   45790 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0816 17:39:58.098652   45790 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
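Similarly, a hypothetical NRI enablement that keeps the default socket and plugin paths listed above:

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"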
	I0816 17:39:58.098658   45790 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0816 17:39:58.098663   45790 command_runner.go:130] > [crio.stats]
	I0816 17:39:58.098670   45790 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0816 17:39:58.098680   45790 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0816 17:39:58.098687   45790 command_runner.go:130] > # stats_collection_period = 0
	I0816 17:39:58.098715   45790 command_runner.go:130] ! time="2024-08-16 17:39:58.051659931Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0816 17:39:58.098738   45790 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0816 17:39:58.098868   45790 cni.go:84] Creating CNI manager for ""
	I0816 17:39:58.098878   45790 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 17:39:58.098889   45790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:39:58.098915   45790 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-797386 NodeName:multinode-797386 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:39:58.099090   45790 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-797386"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 17:39:58.099158   45790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:39:58.108732   45790 command_runner.go:130] > kubeadm
	I0816 17:39:58.108749   45790 command_runner.go:130] > kubectl
	I0816 17:39:58.108755   45790 command_runner.go:130] > kubelet
	I0816 17:39:58.108872   45790 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:39:58.108930   45790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 17:39:58.117711   45790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 17:39:58.132111   45790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:39:58.146287   45790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0816 17:39:58.161543   45790 ssh_runner.go:195] Run: grep 192.168.39.218	control-plane.minikube.internal$ /etc/hosts
	I0816 17:39:58.164797   45790 command_runner.go:130] > 192.168.39.218	control-plane.minikube.internal
	I0816 17:39:58.164853   45790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:39:58.300749   45790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:39:58.315462   45790 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386 for IP: 192.168.39.218
	I0816 17:39:58.315488   45790 certs.go:194] generating shared ca certs ...
	I0816 17:39:58.315506   45790 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:39:58.315680   45790 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:39:58.315718   45790 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:39:58.315729   45790 certs.go:256] generating profile certs ...
	I0816 17:39:58.315801   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/client.key
	I0816 17:39:58.315856   45790 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.key.e5b1fba5
	I0816 17:39:58.315889   45790 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.key
	I0816 17:39:58.315899   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 17:39:58.315912   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 17:39:58.315923   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 17:39:58.315933   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 17:39:58.315945   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 17:39:58.315959   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 17:39:58.315972   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 17:39:58.315986   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 17:39:58.316049   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:39:58.316076   45790 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:39:58.316085   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:39:58.316107   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:39:58.316128   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:39:58.316148   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:39:58.316185   45790 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:39:58.316212   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem -> /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.316226   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.316238   45790 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.316875   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:39:58.338921   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:39:58.360319   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:39:58.382234   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:39:58.404599   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 17:39:58.426747   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:39:58.447847   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:39:58.469629   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/multinode-797386/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 17:39:58.491056   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:39:58.512795   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:39:58.534479   45790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:39:58.555704   45790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:39:58.570418   45790 ssh_runner.go:195] Run: openssl version
	I0816 17:39:58.575519   45790 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0816 17:39:58.575709   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:39:58.585300   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.589857   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.589878   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.589915   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:39:58.594966   45790 command_runner.go:130] > b5213941
	I0816 17:39:58.595010   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 17:39:58.603431   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:39:58.612612   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.616399   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.616425   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.616463   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:39:58.621314   45790 command_runner.go:130] > 51391683
	I0816 17:39:58.621404   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:39:58.629544   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:39:58.638770   45790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.642547   45790 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.642574   45790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.642610   45790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:39:58.647550   45790 command_runner.go:130] > 3ec20f2e
	I0816 17:39:58.647594   45790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 17:39:58.655704   45790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:39:58.659342   45790 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:39:58.659361   45790 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0816 17:39:58.659369   45790 command_runner.go:130] > Device: 253,1	Inode: 1056278     Links: 1
	I0816 17:39:58.659381   45790 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 17:39:58.659393   45790 command_runner.go:130] > Access: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659405   45790 command_runner.go:130] > Modify: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659413   45790 command_runner.go:130] > Change: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659424   45790 command_runner.go:130] >  Birth: 2024-08-16 17:33:08.818485768 +0000
	I0816 17:39:58.659470   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 17:39:58.664301   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.664489   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 17:39:58.669261   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.669422   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 17:39:58.674109   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.674259   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 17:39:58.679047   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.679091   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 17:39:58.683677   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.683763   45790 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 17:39:58.688832   45790 command_runner.go:130] > Certificate will not expire
	I0816 17:39:58.688893   45790 kubeadm.go:392] StartCluster: {Name:multinode-797386 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-797386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.71 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:39:58.689003   45790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:39:58.689051   45790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:39:58.723885   45790 command_runner.go:130] > cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac
	I0816 17:39:58.723910   45790 command_runner.go:130] > ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0
	I0816 17:39:58.723918   45790 command_runner.go:130] > d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11
	I0816 17:39:58.723944   45790 command_runner.go:130] > 40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732
	I0816 17:39:58.723956   45790 command_runner.go:130] > a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a
	I0816 17:39:58.723962   45790 command_runner.go:130] > a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f
	I0816 17:39:58.723973   45790 command_runner.go:130] > c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d
	I0816 17:39:58.723983   45790 command_runner.go:130] > 6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f
	I0816 17:39:58.724023   45790 cri.go:89] found id: "cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac"
	I0816 17:39:58.724036   45790 cri.go:89] found id: "ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0"
	I0816 17:39:58.724042   45790 cri.go:89] found id: "d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11"
	I0816 17:39:58.724049   45790 cri.go:89] found id: "40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732"
	I0816 17:39:58.724054   45790 cri.go:89] found id: "a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a"
	I0816 17:39:58.724060   45790 cri.go:89] found id: "a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f"
	I0816 17:39:58.724065   45790 cri.go:89] found id: "c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d"
	I0816 17:39:58.724068   45790 cri.go:89] found id: "6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f"
	I0816 17:39:58.724071   45790 cri.go:89] found id: ""
	I0816 17:39:58.724117   45790 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.234950003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830248234925587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cd2adbf-7b08-4039-be88-b8afc9cc56fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.235871225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3c6fd68-5eab-433f-9fe6-eaf78686bca4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.235965095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3c6fd68-5eab-433f-9fe6-eaf78686bca4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.236328233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3c6fd68-5eab-433f-9fe6-eaf78686bca4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.273338012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a46fa553-3789-4176-becf-942cd56baa0d name=/runtime.v1.RuntimeService/Version
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.273500820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a46fa553-3789-4176-becf-942cd56baa0d name=/runtime.v1.RuntimeService/Version
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.274530461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43b8d896-4254-4b68-97be-8e6e035429ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.274947999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830248274928169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43b8d896-4254-4b68-97be-8e6e035429ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.275511747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e679436-2b99-448d-a1e9-3497149cdc9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.275564533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e679436-2b99-448d-a1e9-3497149cdc9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.276361508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e679436-2b99-448d-a1e9-3497149cdc9d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.325738926Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1478e0fc-be9c-4a23-b8cb-7f63b5e5a59b name=/runtime.v1.RuntimeService/Version
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.325843146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1478e0fc-be9c-4a23-b8cb-7f63b5e5a59b name=/runtime.v1.RuntimeService/Version
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.327246778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d012954d-c6e9-478f-8c9b-4c00520eddf0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.327731542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830248327704539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d012954d-c6e9-478f-8c9b-4c00520eddf0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.328384935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49d53408-f10f-4d3a-a83d-9ef75b37a87e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.328474077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49d53408-f10f-4d3a-a83d-9ef75b37a87e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.328841182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49d53408-f10f-4d3a-a83d-9ef75b37a87e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.375196508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fa30cad-03d0-4b03-92a3-a999c842c217 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.375289480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fa30cad-03d0-4b03-92a3-a999c842c217 name=/runtime.v1.RuntimeService/Version
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.376667014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d7dfb70-dd70-4e1e-99ee-600f9a3c5551 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.377092239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830248377065477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d7dfb70-dd70-4e1e-99ee-600f9a3c5551 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.377788346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c01b221-a58e-496c-85d0-6f28a10b126e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.377861160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c01b221-a58e-496c-85d0-6f28a10b126e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 17:44:08 multinode-797386 crio[2782]: time="2024-08-16 17:44:08.378257891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:098cb4d42f45971f744870cab8252004b646b1cae854b99ed067069c91d0a919,PodSandboxId:69a58bde7f31e870042d7bd2a0b242639d66d1e2275ce8c9da9267164a8a8589,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723830039464008494,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16,PodSandboxId:b81b19f5f5d080de6ca34dc0ad182b3ba32a3425f2375075c8330d6b4d5d59f2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723830006003627531,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65,PodSandboxId:2d4df22c0989e95cf8a46ba7bf1b7286d50b35cfab8fd3504db38afff2dbdfc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723830005863909944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1c359d855cf881c9dd98bbfc89a234c44c8df8963d9283cd6e479c25c66a0b6,PodSandboxId:eff1f1e5ad9f410a1eb62b74d5907d137204f487b8094452006f4782d0239e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723830005836122349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997,PodSandboxId:50e8d156417dd83b93cc20d37a0a4fcdf187c58ebc894f5dd455adcf0c2d6402,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723830005694420111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2,PodSandboxId:e8e4d3319b6d68a5c76b6481687e4d8610a8ad5d1a68389d83764b2b19f0bea0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723830000940003177,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135,PodSandboxId:4ab9f1526277a13ad2c8e657d92025f32aa77e6de04fc3ad5ceb6b0438e5ad58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723830000888587770,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b818b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3,PodSandboxId:ff3bb9a3535c889166d54b86e10b096c38e35a23a28ac57b37e6ea95e20238e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723830000865318712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65,PodSandboxId:f72fd728d5193ee968665a0dd3cfdb2264323cc2d77d4a72c1aeadefde6facf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723830000804216173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d0ad9bebe5d46ff8559086079a818bce5894de7bc569ec34f5c83a0da2b450,PodSandboxId:f95d33ca1bb337df421b29f762df2a024b547b370b2e99736426dde2275e3d94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723829672341815218,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-6986q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6043b81f-fa83-40b5-9674-cf22bb48ad7a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac,PodSandboxId:293fa56faeea3340fb90217cfe3bdff948214c0089a764c14a6110acaa73ed85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723829620219710468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bskwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a6d6155-5571-4393-9e73-83a08e87cbf5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5b49957caceff30c5edc18fe05235aed3ed6346fff1257d81ab0332e4414b0,PodSandboxId:ad2ceb5c4d227b17acf99d8a12f45df9b83e4344c225d1b75bbffcb317e98bc1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723829618677357958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 365b0a49-c86a-46f0-bf0f-3b84e1cf9ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11,PodSandboxId:b847be3789c2c0c4e5451846d9ec79f6e0ee5c208a86780955e67d7a9c7ce2ad,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723829607086102880,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ksr6k,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b0a46b8f-ea93-42d6-a11c-be45c46b3090,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732,PodSandboxId:000e360ba74addf02c020b3454196d03f64bdb827f18333fc95773fe5a167496,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723829603514937370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tdmh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 6b81a140-0be4-49c7-8d0b-1ebef6efbdb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f,PodSandboxId:564e02472f5b7c1f7201ff4264735c6e2aa18bb61088a92e82238567eb565b85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723829592230868888,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-797386,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3c7a4368fb8cd946d884c2df1c461975,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a,PodSandboxId:99cc5c92b21797b13b182c2713b158ded6209c0016ea8ebab3d84d6a55c9bc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723829592235985468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76c9d6f117fa769b3b81
8b0419b322dc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d,PodSandboxId:08901405eaafce26555a9e8d717b4722961b1ca8809aa5c9a84f3b3476868752,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723829592225164976,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5737bb43c1673c7c014a490a4465a36e,},
Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f,PodSandboxId:eadfe17cfad28e4b89ba41ea017afb8b50faa9bf87cc270f2586977426a03a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723829592013528926,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-797386,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f362b6a924856b76b521c8598769e769,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c01b221-a58e-496c-85d0-6f28a10b126e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	098cb4d42f459       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   69a58bde7f31e       busybox-7dff88458-6986q
	05ab432af1c9c       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   b81b19f5f5d08       kindnet-ksr6k
	1ac3da3f42414       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   2d4df22c0989e       coredns-6f6b679f8f-bskwd
	f1c359d855cf8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   eff1f1e5ad9f4       storage-provisioner
	e50cd41dedf63       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   50e8d156417dd       kube-proxy-tdmh8
	df7fce7cad33c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   e8e4d3319b6d6       kube-scheduler-multinode-797386
	60af2ace6d078       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   4ab9f1526277a       etcd-multinode-797386
	789e8d05ce4d8       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   ff3bb9a3535c8       kube-controller-manager-multinode-797386
	7e5a8f300597d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   f72fd728d5193       kube-apiserver-multinode-797386
	f4d0ad9bebe5d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   f95d33ca1bb33       busybox-7dff88458-6986q
	cdce23922d9c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   293fa56faeea3       coredns-6f6b679f8f-bskwd
	ab5b49957cace       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   ad2ceb5c4d227       storage-provisioner
	d7de7a6593d24       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   b847be3789c2c       kindnet-ksr6k
	40703b34f4634       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   000e360ba74ad       kube-proxy-tdmh8
	a7c3646b332f9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   99cc5c92b2179       etcd-multinode-797386
	a9a8478e7a74e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   564e02472f5b7       kube-controller-manager-multinode-797386
	c48fc7ab9fc9b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   08901405eaafc       kube-apiserver-multinode-797386
	6c6740953e611       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   eadfe17cfad28       kube-scheduler-multinode-797386
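The container status table above is the CRI view reported by cri-o: every control-plane and CNI container has an Exited attempt 0 from the first boot and a Running attempt 1 created after the node restart. As a rough sketch (assuming the multinode-797386 profile still exists and the same binary path used elsewhere in this report), the equivalent listing can be pulled straight from the node:

    out/minikube-linux-amd64 -p multinode-797386 ssh "sudo crictl ps -a"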
	
	
	==> coredns [1ac3da3f42414f32fe9de3cb0e5b73ddb03e164893d0f0c5ec697f791f0c6d65] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49747 - 32955 "HINFO IN 4622227773593751248.7735566643631206532. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011781015s
	
	
	==> coredns [cdce23922d9c79c1305ead80951b322992a6c4263cccc427b5eef22d407760ac] <==
	[INFO] 10.244.1.2:59194 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001860817s
	[INFO] 10.244.1.2:56815 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009145s
	[INFO] 10.244.1.2:47769 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075554s
	[INFO] 10.244.1.2:39798 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001272791s
	[INFO] 10.244.1.2:54706 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066123s
	[INFO] 10.244.1.2:34364 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060491s
	[INFO] 10.244.1.2:54133 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069582s
	[INFO] 10.244.0.3:35865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012994s
	[INFO] 10.244.0.3:56139 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000041737s
	[INFO] 10.244.0.3:49918 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039779s
	[INFO] 10.244.0.3:37411 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031333s
	[INFO] 10.244.1.2:36935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109219s
	[INFO] 10.244.1.2:34050 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119027s
	[INFO] 10.244.1.2:36132 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080311s
	[INFO] 10.244.1.2:42498 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055105s
	[INFO] 10.244.0.3:40926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194258s
	[INFO] 10.244.0.3:52515 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00007584s
	[INFO] 10.244.0.3:46454 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070512s
	[INFO] 10.244.0.3:33899 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080169s
	[INFO] 10.244.1.2:43068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151468s
	[INFO] 10.244.1.2:33364 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149498s
	[INFO] 10.244.1.2:38424 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007622s
	[INFO] 10.244.1.2:36342 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112145s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
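The query log above ends with CoreDNS receiving SIGTERM and going into lameduck mode, which matches the deliberate node restart rather than a crash; the A/AAAA/PTR lookups for kubernetes.default and host.minikube.internal are the resolutions exercised by the busybox test pods. A minimal way to re-drive one of those lookups by hand, assuming the busybox pod shown in the container list is still running, is:

    kubectl --context multinode-797386 exec busybox-7dff88458-6986q -- nslookup kubernetes.default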
	
	
	==> describe nodes <==
	Name:               multinode-797386
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-797386
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=multinode-797386
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_33_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:33:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-797386
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:43:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:40:04 +0000   Fri, 16 Aug 2024 17:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    multinode-797386
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 61e13cf0aea544eebccb4bbf7da65841
	  System UUID:                61e13cf0-aea5-44ee-bccb-4bbf7da65841
	  Boot ID:                    ac23b698-afdd-47fb-a552-4de7e8c23dc5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6986q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 coredns-6f6b679f8f-bskwd                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-797386                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-ksr6k                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-797386             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-797386    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-tdmh8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-797386             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-797386 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-797386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-797386 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-797386 event: Registered Node multinode-797386 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-797386 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-797386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-797386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-797386 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-797386 event: Registered Node multinode-797386 in Controller
	
	
	Name:               multinode-797386-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-797386-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=multinode-797386
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T17_40_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:40:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-797386-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:41:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:42:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:42:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:42:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 17:41:14 +0000   Fri, 16 Aug 2024 17:42:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    multinode-797386-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4942706ae02e46b1ad0097d8fe1d8139
	  System UUID:                4942706a-e02e-46b1-ad00-97d8fe1d8139
	  Boot ID:                    a6f68d2d-c7a9-430d-bc73-9134ba12128a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dpsv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-wz6gh              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-gdpkq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-797386-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-797386-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-797386-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-797386-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m25s)  kubelet          Node multinode-797386-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m25s)  kubelet          Node multinode-797386-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m25s)  kubelet          Node multinode-797386-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-797386-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-797386-m02 status is now: NodeNotReady
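The second node is the interesting one here: its conditions flipped to Unknown at 17:42:27 once the kubelet stopped posting status, the node-controller applied the unreachable NoSchedule/NoExecute taints, and the last recorded event is NodeNotReady. To confirm the same state interactively (a sketch, assuming the profile is still up), the node list and taints can be checked with:

    kubectl --context multinode-797386 get nodes -o wide
    kubectl --context multinode-797386 describe node multinode-797386-m02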
	
	
	==> dmesg <==
	[  +0.062095] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.173494] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.137993] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.259680] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +3.773609] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.397247] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.061371] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.509967] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.076510] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.147734] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.432695] systemd-fstab-generator[1340]: Ignoring "noauto" option for root device
	[  +4.946221] kauditd_printk_skb: 59 callbacks suppressed
	[Aug16 17:34] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 17:39] systemd-fstab-generator[2701]: Ignoring "noauto" option for root device
	[  +0.139386] systemd-fstab-generator[2713]: Ignoring "noauto" option for root device
	[  +0.170362] systemd-fstab-generator[2727]: Ignoring "noauto" option for root device
	[  +0.141971] systemd-fstab-generator[2739]: Ignoring "noauto" option for root device
	[  +0.264535] systemd-fstab-generator[2767]: Ignoring "noauto" option for root device
	[  +8.428790] systemd-fstab-generator[2865]: Ignoring "noauto" option for root device
	[  +0.088295] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.681147] systemd-fstab-generator[2987]: Ignoring "noauto" option for root device
	[Aug16 17:40] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.189245] systemd-fstab-generator[3828]: Ignoring "noauto" option for root device
	[  +0.095202] kauditd_printk_skb: 34 callbacks suppressed
	[ +19.498294] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [60af2ace6d078854a1463f8016e4ab1e4b7bae447c85c8ca8e8133634455d135] <==
	{"level":"info","ts":"2024-08-16T17:40:01.291769Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","added-peer-id":"e5f6aca4c72f5b22","added-peer-peer-urls":["https://192.168.39.218:2380"]}
	{"level":"info","ts":"2024-08-16T17:40:01.292842Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2483a61a4a74c1c4","local-member-id":"e5f6aca4c72f5b22","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:40:01.293035Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:40:01.296549Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:40:01.298219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T17:40:01.300381Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e5f6aca4c72f5b22","initial-advertise-peer-urls":["https://192.168.39.218:2380"],"listen-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.218:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T17:40:01.300418Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T17:40:01.300610Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:40:01.300676Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:40:03.048934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T17:40:03.048996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T17:40:03.049046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgPreVoteResp from e5f6aca4c72f5b22 at term 2"}
	{"level":"info","ts":"2024-08-16T17:40:03.049062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.049068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 received MsgVoteResp from e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.049103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5f6aca4c72f5b22 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.049115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5f6aca4c72f5b22 elected leader e5f6aca4c72f5b22 at term 3"}
	{"level":"info","ts":"2024-08-16T17:40:03.054810Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e5f6aca4c72f5b22","local-member-attributes":"{Name:multinode-797386 ClientURLs:[https://192.168.39.218:2379]}","request-path":"/0/members/e5f6aca4c72f5b22/attributes","cluster-id":"2483a61a4a74c1c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:40:03.055019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:40:03.055519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:40:03.055613Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:40:03.055635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:40:03.056386Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:40:03.056390Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:40:03.057326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T17:40:03.057327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.218:2379"}
	
	
	==> etcd [a7c3646b332f95294341f2049499a2f5ec6798771184ced80274c75132ee031a] <==
	{"level":"info","ts":"2024-08-16T17:34:06.813639Z","caller":"traceutil/trace.go:171","msg":"trace[1946004062] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:454; }","duration":"224.277596ms","start":"2024-08-16T17:34:06.589335Z","end":"2024-08-16T17:34:06.813612Z","steps":["trace[1946004062] 'read index received'  (duration: 72.43885ms)","trace[1946004062] 'applied index is now lower than readState.Index'  (duration: 151.836558ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T17:34:06.813722Z","caller":"traceutil/trace.go:171","msg":"trace[1526626322] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"229.922204ms","start":"2024-08-16T17:34:06.583791Z","end":"2024-08-16T17:34:06.813714Z","steps":["trace[1526626322] 'process raft request'  (duration: 77.947501ms)","trace[1526626322] 'compare'  (duration: 150.77613ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T17:34:06.813926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.420617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-797386-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:34:06.813967Z","caller":"traceutil/trace.go:171","msg":"trace[480461888] range","detail":"{range_begin:/registry/minions/multinode-797386-m02; range_end:; response_count:0; response_revision:438; }","duration":"224.468217ms","start":"2024-08-16T17:34:06.589489Z","end":"2024-08-16T17:34:06.813957Z","steps":["trace[480461888] 'agreement among raft nodes before linearized reading'  (duration: 224.402055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:34:06.814067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.728335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-797386-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:34:06.814097Z","caller":"traceutil/trace.go:171","msg":"trace[1911649428] range","detail":"{range_begin:/registry/csinodes/multinode-797386-m02; range_end:; response_count:0; response_revision:438; }","duration":"224.761428ms","start":"2024-08-16T17:34:06.589330Z","end":"2024-08-16T17:34:06.814092Z","steps":["trace[1911649428] 'agreement among raft nodes before linearized reading'  (duration: 224.719499ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T17:35:03.159546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.751392ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:35:03.159870Z","caller":"traceutil/trace.go:171","msg":"trace[1079726022] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:573; }","duration":"151.095109ms","start":"2024-08-16T17:35:03.008753Z","end":"2024-08-16T17:35:03.159848Z","steps":["trace[1079726022] 'range keys from in-memory index tree'  (duration: 150.733446ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:03.159566Z","caller":"traceutil/trace.go:171","msg":"trace[137127456] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"203.409311ms","start":"2024-08-16T17:35:02.956134Z","end":"2024-08-16T17:35:03.159543Z","steps":["trace[137127456] 'read index received'  (duration: 198.826391ms)","trace[137127456] 'applied index is now lower than readState.Index'  (duration: 4.582427ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T17:35:03.159756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.585898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-797386-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T17:35:03.160109Z","caller":"traceutil/trace.go:171","msg":"trace[444285936] range","detail":"{range_begin:/registry/minions/multinode-797386-m03; range_end:; response_count:0; response_revision:574; }","duration":"203.936125ms","start":"2024-08-16T17:35:02.956130Z","end":"2024-08-16T17:35:03.160066Z","steps":["trace[444285936] 'agreement among raft nodes before linearized reading'  (duration: 203.529461ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:03.159779Z","caller":"traceutil/trace.go:171","msg":"trace[1615455947] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"225.282903ms","start":"2024-08-16T17:35:02.934490Z","end":"2024-08-16T17:35:03.159773Z","steps":["trace[1615455947] 'process raft request'  (duration: 220.507695ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:06.416874Z","caller":"traceutil/trace.go:171","msg":"trace[1291442356] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"117.08554ms","start":"2024-08-16T17:35:06.299774Z","end":"2024-08-16T17:35:06.416859Z","steps":["trace[1291442356] 'process raft request'  (duration: 116.883781ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:57.307186Z","caller":"traceutil/trace.go:171","msg":"trace[1337146133] transaction","detail":"{read_only:false; response_revision:702; number_of_response:1; }","duration":"187.451347ms","start":"2024-08-16T17:35:57.119704Z","end":"2024-08-16T17:35:57.307156Z","steps":["trace[1337146133] 'process raft request'  (duration: 187.350688ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:35:57.311626Z","caller":"traceutil/trace.go:171","msg":"trace[516929672] transaction","detail":"{read_only:false; response_revision:703; number_of_response:1; }","duration":"173.944514ms","start":"2024-08-16T17:35:57.137667Z","end":"2024-08-16T17:35:57.311612Z","steps":["trace[516929672] 'process raft request'  (duration: 173.847389ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T17:38:17.771346Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-16T17:38:17.771517Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-797386","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	{"level":"warn","ts":"2024-08-16T17:38:17.771662Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:38:17.771780Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:38:17.808831Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T17:38:17.809020Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.218:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T17:38:17.809255Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e5f6aca4c72f5b22","current-leader-member-id":"e5f6aca4c72f5b22"}
	{"level":"info","ts":"2024-08-16T17:38:17.814484Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:38:17.814589Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.218:2380"}
	{"level":"info","ts":"2024-08-16T17:38:17.814599Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-797386","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.218:2380"],"advertise-client-urls":["https://192.168.39.218:2379"]}
	
	
	==> kernel <==
	 17:44:08 up 11 min,  0 users,  load average: 0.17, 0.20, 0.17
	Linux multinode-797386 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [05ab432af1c9ceb8c8581aa18daa8602c8d5ffab88a3c85c24b28e75eb810a16] <==
	I0816 17:43:06.849684       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:43:16.853141       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:43:16.853190       1 main.go:299] handling current node
	I0816 17:43:16.853204       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:43:16.853210       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:43:26.848809       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:43:26.848851       1 main.go:299] handling current node
	I0816 17:43:26.848866       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:43:26.848872       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:43:36.849662       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:43:36.849709       1 main.go:299] handling current node
	I0816 17:43:36.849739       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:43:36.849746       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:43:46.853888       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:43:46.853937       1 main.go:299] handling current node
	I0816 17:43:46.853957       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:43:46.853966       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:43:56.854198       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:43:56.854248       1 main.go:299] handling current node
	I0816 17:43:56.854269       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:43:56.854276       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:44:06.849028       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:44:06.849167       1 main.go:299] handling current node
	I0816 17:44:06.849213       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:44:06.849232       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
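The running kindnet instance only sees two nodes (192.168.39.218 and 192.168.39.27) and keeps re-syncing the 10.244.1.0/24 pod CIDR for multinode-797386-m02, while the exited instance below still knew about an m03 node with 10.244.3.0/24. kindnet realizes these mappings as host routes to the peer node's internal IP, so a quick sanity check (a sketch, assuming the profile is still running) is to look at the route table on the primary node and verify a 10.244.1.0/24 route via 192.168.39.27 is present:

    out/minikube-linux-amd64 -p multinode-797386 ssh "ip route"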
	
	
	==> kindnet [d7de7a6593d24c1ff3638eb2b2d773183f0d8a3dc6dc55377f15f79d1c5a1b11] <==
	I0816 17:37:28.045902       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:37:38.046030       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:37:38.046141       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:37:38.046295       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:37:38.046322       1 main.go:299] handling current node
	I0816 17:37:38.046358       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:37:38.046376       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:37:48.053569       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:37:48.053611       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:37:48.053791       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:37:48.053811       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:37:48.053892       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:37:48.053912       1 main.go:299] handling current node
	I0816 17:37:58.045480       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:37:58.045573       1 main.go:299] handling current node
	I0816 17:37:58.045602       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:37:58.045611       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:37:58.045792       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:37:58.045814       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	I0816 17:38:08.047106       1 main.go:295] Handling node with IPs: map[192.168.39.218:{}]
	I0816 17:38:08.047268       1 main.go:299] handling current node
	I0816 17:38:08.047316       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0816 17:38:08.047336       1 main.go:322] Node multinode-797386-m02 has CIDR [10.244.1.0/24] 
	I0816 17:38:08.047527       1 main.go:295] Handling node with IPs: map[192.168.39.71:{}]
	I0816 17:38:08.047559       1 main.go:322] Node multinode-797386-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7e5a8f300597d04f1ee5f27ae2cc632a1587c6299983e4a6de87814f96e37c65] <==
	I0816 17:40:04.339680       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 17:40:04.343546       1 aggregator.go:171] initial CRD sync complete...
	I0816 17:40:04.343644       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 17:40:04.343718       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 17:40:04.344621       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 17:40:04.344951       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 17:40:04.345035       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0816 17:40:04.377828       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 17:40:04.379875       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 17:40:04.397548       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 17:40:04.397621       1 policy_source.go:224] refreshing policies
	I0816 17:40:04.399331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:40:04.433535       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 17:40:04.434003       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:40:04.436170       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 17:40:04.439204       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 17:40:04.448605       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:40:05.244904       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:40:06.504196       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 17:40:06.641341       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 17:40:06.661420       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 17:40:06.729513       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:40:06.735075       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:40:07.885122       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 17:40:07.937003       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c48fc7ab9fc9bd52528c2098fbf029f5d200bc571e1de1f6d6ef946967e93e1d] <==
	E0816 17:34:34.473737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38848: use of closed network connection
	E0816 17:34:34.638696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38854: use of closed network connection
	E0816 17:34:34.797185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38870: use of closed network connection
	E0816 17:34:34.959574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.218:8443->192.168.39.1:38888: use of closed network connection
	I0816 17:38:17.773747       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0816 17:38:17.776851       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.776925       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.776962       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.785622       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.794962       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.797502       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798129       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798211       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798268       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798329       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798382       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.798966       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799067       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799106       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799139       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799191       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799242       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799281       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799315       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 17:38:17.799368       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [789e8d05ce4d8643459c20ed36046ab525699960cea246d15807f7a5f98866f3] <==
	I0816 17:41:22.384230       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-797386-m03\" does not exist"
	I0816 17:41:22.405029       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-797386-m03" podCIDRs=["10.244.2.0/24"]
	I0816 17:41:22.405457       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:22.405551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:22.823886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:22.949712       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:23.147561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:32.490086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:41.805890       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:41:41.806007       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:41.814643       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:42.914538       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:46.702658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:46.722813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:41:47.153146       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:41:47.153290       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:42:27.667865       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jwxd2"
	I0816 17:42:27.700311       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jwxd2"
	I0816 17:42:27.700358       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fk9hf"
	I0816 17:42:27.727641       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fk9hf"
	I0816 17:42:27.932044       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:42:27.950210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:42:27.968400       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.706916ms"
	I0816 17:42:27.969114       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.795µs"
	I0816 17:42:32.988335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	
	
	==> kube-controller-manager [a9a8478e7a74ec1f31463e518d0ac01e55e49029495ecbd94952d75b02e5e31f] <==
	I0816 17:35:51.381913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:51.606046       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:35:51.606158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:52.876802       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:35:52.876854       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-797386-m03\" does not exist"
	I0816 17:35:52.894683       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-797386-m03" podCIDRs=["10.244.3.0/24"]
	I0816 17:35:52.894760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:52.894800       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:53.099895       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:53.425292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:35:57.314508       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:03.203063       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:12.469626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:12.470475       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m02"
	I0816 17:36:12.481423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:17.135375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:52.152879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:36:52.153347       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-797386-m03"
	I0816 17:36:52.177243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:36:52.216105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.632246ms"
	I0816 17:36:52.216291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.396µs"
	I0816 17:36:57.215258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:57.233877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	I0816 17:36:57.284010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m02"
	I0816 17:37:07.357152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-797386-m03"
	
	
	==> kube-proxy [40703b34f4634a7257846aa83155677fe4db38b0df5ae116f3ef7e14e7ced732] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:33:23.907726       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:33:23.926518       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	E0816 17:33:23.926808       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:33:23.958144       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:33:23.958174       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:33:23.958206       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:33:23.961914       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:33:23.962217       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:33:23.962274       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:33:23.963489       1 config.go:197] "Starting service config controller"
	I0816 17:33:23.963555       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:33:23.963593       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:33:23.963608       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:33:23.964069       1 config.go:326] "Starting node config controller"
	I0816 17:33:23.965698       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:33:24.064358       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 17:33:24.064478       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:33:24.065893       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e50cd41dedf63835f7ab9caeedc1516aa542aba5eb4fba13647d34bbc9737997] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 17:40:06.102144       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 17:40:06.123046       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.218"]
	E0816 17:40:06.123104       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:40:06.216323       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 17:40:06.216366       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 17:40:06.216392       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:40:06.218679       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:40:06.218900       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:40:06.218911       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:40:06.220588       1 config.go:197] "Starting service config controller"
	I0816 17:40:06.220602       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:40:06.220618       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:40:06.220622       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:40:06.221121       1 config.go:326] "Starting node config controller"
	I0816 17:40:06.221130       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:40:06.322317       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:40:06.322357       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:40:06.322394       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6c6740953e611ccc938310422e50ac5f9346f75cad1f1a8641b062847b43647f] <==
	E0816 17:33:16.180755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.331246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 17:33:16.331296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.359866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:33:16.360011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.359978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:33:16.360142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.377801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:33:16.378087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.395497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:33:16.395637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.395727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:33:16.395840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.417509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:33:16.417554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.425543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:33:16.425587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.479378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:33:16.479562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.479928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:33:16.479966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:33:16.487689       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 17:33:16.487821       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:33:18.276640       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 17:38:17.763087       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [df7fce7cad33c0a2bcf3266bec644bd9c040b5cb853854e79bff3ab38f60e9b2] <==
	I0816 17:40:01.885255       1 serving.go:386] Generated self-signed cert in-memory
	W0816 17:40:04.330713       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 17:40:04.330798       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 17:40:04.330859       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 17:40:04.330882       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 17:40:04.376876       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 17:40:04.376951       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:40:04.386698       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 17:40:04.387191       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 17:40:04.388520       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 17:40:04.388597       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 17:40:04.489629       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 17:42:50 multinode-797386 kubelet[2994]: E0816 17:42:50.296554    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830170295118913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:00 multinode-797386 kubelet[2994]: E0816 17:43:00.222926    2994 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:43:00 multinode-797386 kubelet[2994]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:43:00 multinode-797386 kubelet[2994]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:43:00 multinode-797386 kubelet[2994]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:43:00 multinode-797386 kubelet[2994]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:43:00 multinode-797386 kubelet[2994]: E0816 17:43:00.298731    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830180298372038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:00 multinode-797386 kubelet[2994]: E0816 17:43:00.298758    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830180298372038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:10 multinode-797386 kubelet[2994]: E0816 17:43:10.300552    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830190300189736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:10 multinode-797386 kubelet[2994]: E0816 17:43:10.300587    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830190300189736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:20 multinode-797386 kubelet[2994]: E0816 17:43:20.302211    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830200301852236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:20 multinode-797386 kubelet[2994]: E0816 17:43:20.302246    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830200301852236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:30 multinode-797386 kubelet[2994]: E0816 17:43:30.303677    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830210303172687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:30 multinode-797386 kubelet[2994]: E0816 17:43:30.303967    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830210303172687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:40 multinode-797386 kubelet[2994]: E0816 17:43:40.305410    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830220305085949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:40 multinode-797386 kubelet[2994]: E0816 17:43:40.305471    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830220305085949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:50 multinode-797386 kubelet[2994]: E0816 17:43:50.307474    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830230307038359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:43:50 multinode-797386 kubelet[2994]: E0816 17:43:50.307882    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830230307038359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:44:00 multinode-797386 kubelet[2994]: E0816 17:44:00.222007    2994 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 17:44:00 multinode-797386 kubelet[2994]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 17:44:00 multinode-797386 kubelet[2994]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 17:44:00 multinode-797386 kubelet[2994]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 17:44:00 multinode-797386 kubelet[2994]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 17:44:00 multinode-797386 kubelet[2994]: E0816 17:44:00.310237    2994 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830240309885334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 17:44:00 multinode-797386 kubelet[2994]: E0816 17:44:00.310283    2994 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723830240309885334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 17:44:07.973585   47794 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19461-9545/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-797386 -n multinode-797386
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-797386 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.31s)
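
The "bufio.Scanner: token too long" error in the stderr above is standard Go behavior rather than anything specific to this run: bufio.Scanner refuses tokens longer than its default 64 KiB limit (bufio.MaxScanTokenSize), which is what prevents logs.go from reading lastStart.txt here. A minimal sketch of the pattern and the usual workaround of enlarging the scanner's buffer; the file name and buffer sizes below are illustrative, not taken from the harness:

	// Illustrative only: shows why reading a file with a very long line can
	// fail with "bufio.Scanner: token too long" and how a larger buffer avoids it.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path standing in for a log file with an oversized line.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); a longer line makes
		// Scan() return false and Err() report bufio.ErrTooLong ("token too long").
		// Raising the cap (here to 10 MiB) lets such lines through.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("scan failed: %v", err)
		}
	}

An alternative is to read with bufio.Reader.ReadString('\n'), which grows its buffer as needed and has no fixed per-line cap.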

                                                
                                    
TestPreload (274.38s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-967491 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0816 17:48:21.062308   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-967491 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.814762851s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967491 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-967491 image pull gcr.io/k8s-minikube/busybox: (2.798431008s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-967491
E0816 17:50:55.340339   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:51:12.269436   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-967491: exit status 82 (2m0.444481139s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-967491"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-967491 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-16 17:52:24.253015018 +0000 UTC m=+3844.571560039
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-967491 -n test-preload-967491
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-967491 -n test-preload-967491: exit status 3 (18.451173276s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 17:52:42.700938   50693 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.114:22: connect: no route to host
	E0816 17:52:42.700954   50693 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-967491" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-967491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-967491
--- FAIL: TestPreload (274.38s)

                                                
                                    
TestKubernetesUpgrade (457.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m25.155420438s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-108715] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-108715" primary control-plane node in "kubernetes-upgrade-108715" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:54:33.260574   51774 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:54:33.261560   51774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:54:33.261575   51774 out.go:358] Setting ErrFile to fd 2...
	I0816 17:54:33.261580   51774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:54:33.261772   51774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:54:33.262396   51774 out.go:352] Setting JSON to false
	I0816 17:54:33.263530   51774 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5771,"bootTime":1723825102,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:54:33.263612   51774 start.go:139] virtualization: kvm guest
	I0816 17:54:33.266166   51774 out.go:177] * [kubernetes-upgrade-108715] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:54:33.268360   51774 notify.go:220] Checking for updates...
	I0816 17:54:33.269282   51774 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:54:33.271399   51774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:54:33.272738   51774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:54:33.274147   51774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:54:33.275479   51774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:54:33.278090   51774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:54:33.279715   51774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:54:33.318328   51774 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 17:54:33.319432   51774 start.go:297] selected driver: kvm2
	I0816 17:54:33.319448   51774 start.go:901] validating driver "kvm2" against <nil>
	I0816 17:54:33.319459   51774 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:54:33.320187   51774 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:54:33.320266   51774 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 17:54:33.338120   51774 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 17:54:33.338176   51774 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:54:33.338443   51774 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 17:54:33.338469   51774 cni.go:84] Creating CNI manager for ""
	I0816 17:54:33.338476   51774 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 17:54:33.338482   51774 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 17:54:33.338528   51774 start.go:340] cluster config:
	{Name:kubernetes-upgrade-108715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:54:33.338641   51774 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:54:33.340246   51774 out.go:177] * Starting "kubernetes-upgrade-108715" primary control-plane node in "kubernetes-upgrade-108715" cluster
	I0816 17:54:33.341339   51774 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 17:54:33.341374   51774 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 17:54:33.341382   51774 cache.go:56] Caching tarball of preloaded images
	I0816 17:54:33.341464   51774 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 17:54:33.341478   51774 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 17:54:33.341860   51774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/config.json ...
	I0816 17:54:33.341888   51774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/config.json: {Name:mk7f97cf1178ec5a7c5628f2742314da03feb446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:54:33.342054   51774 start.go:360] acquireMachinesLock for kubernetes-upgrade-108715: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 17:54:33.342097   51774 start.go:364] duration metric: took 25.327µs to acquireMachinesLock for "kubernetes-upgrade-108715"
	I0816 17:54:33.342119   51774 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-108715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 17:54:33.342202   51774 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 17:54:33.343646   51774 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 17:54:33.343774   51774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:54:33.343812   51774 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:54:33.358184   51774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0816 17:54:33.358610   51774 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:54:33.359200   51774 main.go:141] libmachine: Using API Version  1
	I0816 17:54:33.359224   51774 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:54:33.359680   51774 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:54:33.359929   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetMachineName
	I0816 17:54:33.360112   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:33.360264   51774 start.go:159] libmachine.API.Create for "kubernetes-upgrade-108715" (driver="kvm2")
	I0816 17:54:33.360291   51774 client.go:168] LocalClient.Create starting
	I0816 17:54:33.360331   51774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 17:54:33.360367   51774 main.go:141] libmachine: Decoding PEM data...
	I0816 17:54:33.360391   51774 main.go:141] libmachine: Parsing certificate...
	I0816 17:54:33.360453   51774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 17:54:33.360489   51774 main.go:141] libmachine: Decoding PEM data...
	I0816 17:54:33.360505   51774 main.go:141] libmachine: Parsing certificate...
	I0816 17:54:33.360532   51774 main.go:141] libmachine: Running pre-create checks...
	I0816 17:54:33.360545   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .PreCreateCheck
	I0816 17:54:33.360992   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetConfigRaw
	I0816 17:54:33.361454   51774 main.go:141] libmachine: Creating machine...
	I0816 17:54:33.361470   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .Create
	I0816 17:54:33.361612   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Creating KVM machine...
	I0816 17:54:33.362822   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found existing default KVM network
	I0816 17:54:33.363544   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:33.363408   51832 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d910}
	I0816 17:54:33.363569   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | created network xml: 
	I0816 17:54:33.363580   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | <network>
	I0816 17:54:33.363594   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |   <name>mk-kubernetes-upgrade-108715</name>
	I0816 17:54:33.363606   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |   <dns enable='no'/>
	I0816 17:54:33.363618   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |   
	I0816 17:54:33.363636   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 17:54:33.363655   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |     <dhcp>
	I0816 17:54:33.363663   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 17:54:33.363672   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |     </dhcp>
	I0816 17:54:33.363681   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |   </ip>
	I0816 17:54:33.363686   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG |   
	I0816 17:54:33.363695   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | </network>
	I0816 17:54:33.363704   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | 
	I0816 17:54:33.368288   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | trying to create private KVM network mk-kubernetes-upgrade-108715 192.168.39.0/24...
	I0816 17:54:33.436084   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | private KVM network mk-kubernetes-upgrade-108715 192.168.39.0/24 created
	I0816 17:54:33.436122   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:33.436040   51832 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:54:33.436138   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715 ...
	I0816 17:54:33.436154   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 17:54:33.436238   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 17:54:33.675160   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:33.675031   51832 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/id_rsa...
	I0816 17:54:33.792947   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:33.792719   51832 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/kubernetes-upgrade-108715.rawdisk...
	I0816 17:54:33.792985   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Writing magic tar header
	I0816 17:54:33.793003   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715 (perms=drwx------)
	I0816 17:54:33.793026   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 17:54:33.793033   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 17:54:33.793044   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 17:54:33.793054   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 17:54:33.793065   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 17:54:33.793094   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Creating domain...
	I0816 17:54:33.793103   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Writing SSH key tar header
	I0816 17:54:33.793118   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:33.792855   51832 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715 ...
	I0816 17:54:33.793126   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715
	I0816 17:54:33.793133   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 17:54:33.793145   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:54:33.793152   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 17:54:33.793159   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 17:54:33.793165   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Checking permissions on dir: /home/jenkins
	I0816 17:54:33.793171   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Checking permissions on dir: /home
	I0816 17:54:33.793179   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Skipping /home - not owner
	I0816 17:54:33.794256   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) define libvirt domain using xml: 
	I0816 17:54:33.794280   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) <domain type='kvm'>
	I0816 17:54:33.794292   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   <name>kubernetes-upgrade-108715</name>
	I0816 17:54:33.794298   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   <memory unit='MiB'>2200</memory>
	I0816 17:54:33.794303   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   <vcpu>2</vcpu>
	I0816 17:54:33.794308   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   <features>
	I0816 17:54:33.794315   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <acpi/>
	I0816 17:54:33.794323   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <apic/>
	I0816 17:54:33.794328   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <pae/>
	I0816 17:54:33.794337   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     
	I0816 17:54:33.794348   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   </features>
	I0816 17:54:33.794356   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   <cpu mode='host-passthrough'>
	I0816 17:54:33.794367   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   
	I0816 17:54:33.794376   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   </cpu>
	I0816 17:54:33.794383   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   <os>
	I0816 17:54:33.794389   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <type>hvm</type>
	I0816 17:54:33.794397   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <boot dev='cdrom'/>
	I0816 17:54:33.794403   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <boot dev='hd'/>
	I0816 17:54:33.794411   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <bootmenu enable='no'/>
	I0816 17:54:33.794416   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   </os>
	I0816 17:54:33.794423   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   <devices>
	I0816 17:54:33.794433   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <disk type='file' device='cdrom'>
	I0816 17:54:33.794446   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/boot2docker.iso'/>
	I0816 17:54:33.794455   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <target dev='hdc' bus='scsi'/>
	I0816 17:54:33.794460   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <readonly/>
	I0816 17:54:33.794465   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     </disk>
	I0816 17:54:33.794470   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <disk type='file' device='disk'>
	I0816 17:54:33.794477   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 17:54:33.794491   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/kubernetes-upgrade-108715.rawdisk'/>
	I0816 17:54:33.794500   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <target dev='hda' bus='virtio'/>
	I0816 17:54:33.794504   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     </disk>
	I0816 17:54:33.794539   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <interface type='network'>
	I0816 17:54:33.794566   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <source network='mk-kubernetes-upgrade-108715'/>
	I0816 17:54:33.794589   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <model type='virtio'/>
	I0816 17:54:33.794602   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     </interface>
	I0816 17:54:33.794612   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <interface type='network'>
	I0816 17:54:33.794623   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <source network='default'/>
	I0816 17:54:33.794647   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <model type='virtio'/>
	I0816 17:54:33.794667   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     </interface>
	I0816 17:54:33.794680   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <serial type='pty'>
	I0816 17:54:33.794688   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <target port='0'/>
	I0816 17:54:33.794696   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     </serial>
	I0816 17:54:33.794701   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <console type='pty'>
	I0816 17:54:33.794710   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <target type='serial' port='0'/>
	I0816 17:54:33.794726   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     </console>
	I0816 17:54:33.794739   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     <rng model='virtio'>
	I0816 17:54:33.794755   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)       <backend model='random'>/dev/random</backend>
	I0816 17:54:33.794768   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     </rng>
	I0816 17:54:33.794778   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     
	I0816 17:54:33.794786   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)     
	I0816 17:54:33.794793   51774 main.go:141] libmachine: (kubernetes-upgrade-108715)   </devices>
	I0816 17:54:33.794799   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) </domain>
	I0816 17:54:33.794804   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) 
	I0816 17:54:33.798898   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:d1:ec:3c in network default
	I0816 17:54:33.799482   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Ensuring networks are active...
	I0816 17:54:33.799506   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:33.800173   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Ensuring network default is active
	I0816 17:54:33.800468   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Ensuring network mk-kubernetes-upgrade-108715 is active
	I0816 17:54:33.801058   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Getting domain xml...
	I0816 17:54:33.801836   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Creating domain...
	I0816 17:54:35.033077   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Waiting to get IP...
	I0816 17:54:35.034065   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:35.034502   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:35.034584   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:35.034489   51832 retry.go:31] will retry after 248.229364ms: waiting for machine to come up
	I0816 17:54:35.284047   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:35.284505   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:35.284535   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:35.284464   51832 retry.go:31] will retry after 355.570291ms: waiting for machine to come up
	I0816 17:54:35.641896   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:35.642284   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:35.642313   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:35.642241   51832 retry.go:31] will retry after 459.045009ms: waiting for machine to come up
	I0816 17:54:36.102803   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:36.103231   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:36.103261   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:36.103191   51832 retry.go:31] will retry after 460.245624ms: waiting for machine to come up
	I0816 17:54:36.564741   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:36.565123   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:36.565145   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:36.565079   51832 retry.go:31] will retry after 472.360457ms: waiting for machine to come up
	I0816 17:54:37.038695   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:37.039128   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:37.039158   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:37.039077   51832 retry.go:31] will retry after 825.666302ms: waiting for machine to come up
	I0816 17:54:37.866661   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:37.867054   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:37.867100   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:37.867012   51832 retry.go:31] will retry after 1.158314098s: waiting for machine to come up
	I0816 17:54:39.028154   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:39.028617   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:39.028660   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:39.028572   51832 retry.go:31] will retry after 1.341915467s: waiting for machine to come up
	I0816 17:54:40.372613   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:40.373048   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:40.373077   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:40.373011   51832 retry.go:31] will retry after 1.794300153s: waiting for machine to come up
	I0816 17:54:42.170256   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:42.170684   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:42.170711   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:42.170651   51832 retry.go:31] will retry after 2.310710064s: waiting for machine to come up
	I0816 17:54:44.483029   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:44.483470   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:44.483495   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:44.483410   51832 retry.go:31] will retry after 2.354451437s: waiting for machine to come up
	I0816 17:54:46.839059   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:46.839450   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:46.839475   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:46.839409   51832 retry.go:31] will retry after 2.893501716s: waiting for machine to come up
	I0816 17:54:49.735035   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:49.735518   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find current IP address of domain kubernetes-upgrade-108715 in network mk-kubernetes-upgrade-108715
	I0816 17:54:49.735539   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | I0816 17:54:49.735488   51832 retry.go:31] will retry after 3.827507993s: waiting for machine to come up
	I0816 17:54:53.564333   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.564756   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Found IP for machine: 192.168.39.8
	I0816 17:54:53.564800   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has current primary IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.564813   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Reserving static IP address...
	I0816 17:54:53.565172   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-108715", mac: "52:54:00:73:ca:e7", ip: "192.168.39.8"} in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.639173   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Getting to WaitForSSH function...
	I0816 17:54:53.639203   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Reserved static IP address: 192.168.39.8
	I0816 17:54:53.639217   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Waiting for SSH to be available...
	I0816 17:54:53.641879   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.642271   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:53.642307   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.642466   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Using SSH client type: external
	I0816 17:54:53.642492   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/id_rsa (-rw-------)
	I0816 17:54:53.642516   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 17:54:53.642529   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | About to run SSH command:
	I0816 17:54:53.642547   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | exit 0
	I0816 17:54:53.764520   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | SSH cmd err, output: <nil>: 
	I0816 17:54:53.764794   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) KVM machine creation complete!
	I0816 17:54:53.765082   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetConfigRaw
	I0816 17:54:53.765816   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:53.766040   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:53.766258   51774 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 17:54:53.766276   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetState
	I0816 17:54:53.767416   51774 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 17:54:53.767433   51774 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 17:54:53.767441   51774 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 17:54:53.767449   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:53.769799   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.770077   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:53.770104   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.770216   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:53.770398   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:53.770601   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:53.770783   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:53.770923   51774 main.go:141] libmachine: Using SSH client type: native
	I0816 17:54:53.771153   51774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0816 17:54:53.771175   51774 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 17:54:53.871687   51774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:54:53.871714   51774 main.go:141] libmachine: Detecting the provisioner...
	I0816 17:54:53.871725   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:53.874520   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.874897   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:53.874929   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.875028   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:53.875247   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:53.875400   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:53.875598   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:53.875722   51774 main.go:141] libmachine: Using SSH client type: native
	I0816 17:54:53.875879   51774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0816 17:54:53.875890   51774 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 17:54:53.976806   51774 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 17:54:53.976865   51774 main.go:141] libmachine: found compatible host: buildroot
	I0816 17:54:53.976872   51774 main.go:141] libmachine: Provisioning with buildroot...
	I0816 17:54:53.976883   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetMachineName
	I0816 17:54:53.977130   51774 buildroot.go:166] provisioning hostname "kubernetes-upgrade-108715"
	I0816 17:54:53.977156   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetMachineName
	I0816 17:54:53.977388   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:53.980239   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.980548   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:53.980588   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:53.980758   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:53.980922   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:53.981087   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:53.981315   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:53.981486   51774 main.go:141] libmachine: Using SSH client type: native
	I0816 17:54:53.981679   51774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0816 17:54:53.981696   51774 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-108715 && echo "kubernetes-upgrade-108715" | sudo tee /etc/hostname
	I0816 17:54:54.099395   51774 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-108715
	
	I0816 17:54:54.099425   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.102056   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.102346   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.102380   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.102515   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:54.102739   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.102896   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.103026   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:54.103175   51774 main.go:141] libmachine: Using SSH client type: native
	I0816 17:54:54.103341   51774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0816 17:54:54.103357   51774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-108715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-108715/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-108715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:54:54.214046   51774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:54:54.214075   51774 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 17:54:54.214112   51774 buildroot.go:174] setting up certificates
	I0816 17:54:54.214126   51774 provision.go:84] configureAuth start
	I0816 17:54:54.214143   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetMachineName
	I0816 17:54:54.214422   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetIP
	I0816 17:54:54.217411   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.217731   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.217760   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.217904   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.220745   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.221108   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.221124   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.221276   51774 provision.go:143] copyHostCerts
	I0816 17:54:54.221338   51774 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 17:54:54.221359   51774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 17:54:54.221432   51774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 17:54:54.221546   51774 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 17:54:54.221556   51774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 17:54:54.221595   51774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 17:54:54.221686   51774 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 17:54:54.221695   51774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 17:54:54.221729   51774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 17:54:54.221846   51774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-108715 san=[127.0.0.1 192.168.39.8 kubernetes-upgrade-108715 localhost minikube]
	I0816 17:54:54.326449   51774 provision.go:177] copyRemoteCerts
	I0816 17:54:54.326508   51774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:54:54.326536   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.329775   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.330123   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.330151   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.330343   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:54.330511   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.330660   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:54.330772   51774 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/id_rsa Username:docker}
	I0816 17:54:54.409918   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 17:54:54.436510   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0816 17:54:54.462502   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:54:54.483905   51774 provision.go:87] duration metric: took 269.763002ms to configureAuth
	I0816 17:54:54.483934   51774 buildroot.go:189] setting minikube options for container-runtime
	I0816 17:54:54.484139   51774 config.go:182] Loaded profile config "kubernetes-upgrade-108715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 17:54:54.484222   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.486969   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.487285   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.487320   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.487484   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:54.487648   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.487822   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.487940   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:54.488097   51774 main.go:141] libmachine: Using SSH client type: native
	I0816 17:54:54.488257   51774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0816 17:54:54.488269   51774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 17:54:54.741955   51774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 17:54:54.741978   51774 main.go:141] libmachine: Checking connection to Docker...
	I0816 17:54:54.741989   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetURL
	I0816 17:54:54.743280   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | Using libvirt version 6000000
	I0816 17:54:54.745599   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.745932   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.745968   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.746120   51774 main.go:141] libmachine: Docker is up and running!
	I0816 17:54:54.746135   51774 main.go:141] libmachine: Reticulating splines...
	I0816 17:54:54.746141   51774 client.go:171] duration metric: took 21.385841015s to LocalClient.Create
	I0816 17:54:54.746163   51774 start.go:167] duration metric: took 21.385899249s to libmachine.API.Create "kubernetes-upgrade-108715"
	I0816 17:54:54.746175   51774 start.go:293] postStartSetup for "kubernetes-upgrade-108715" (driver="kvm2")
	I0816 17:54:54.746188   51774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:54:54.746209   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:54.746467   51774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:54:54.746489   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.748572   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.748913   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.748943   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.749020   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:54.749194   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.749324   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:54.749477   51774 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/id_rsa Username:docker}
	I0816 17:54:54.830077   51774 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:54:54.833948   51774 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 17:54:54.833980   51774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 17:54:54.834063   51774 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 17:54:54.834193   51774 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 17:54:54.834476   51774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 17:54:54.843132   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:54:54.866684   51774 start.go:296] duration metric: took 120.494561ms for postStartSetup
	I0816 17:54:54.866737   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetConfigRaw
	I0816 17:54:54.867271   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetIP
	I0816 17:54:54.869901   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.870198   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.870220   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.870474   51774 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/config.json ...
	I0816 17:54:54.870714   51774 start.go:128] duration metric: took 21.528500529s to createHost
	I0816 17:54:54.870744   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.873008   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.873361   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.873390   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.873477   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:54.873681   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.873811   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.874002   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:54.874233   51774 main.go:141] libmachine: Using SSH client type: native
	I0816 17:54:54.874409   51774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0816 17:54:54.874421   51774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 17:54:54.977316   51774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723830894.951533215
	
	I0816 17:54:54.977336   51774 fix.go:216] guest clock: 1723830894.951533215
	I0816 17:54:54.977343   51774 fix.go:229] Guest: 2024-08-16 17:54:54.951533215 +0000 UTC Remote: 2024-08-16 17:54:54.870729921 +0000 UTC m=+21.654397386 (delta=80.803294ms)
	I0816 17:54:54.977361   51774 fix.go:200] guest clock delta is within tolerance: 80.803294ms
	I0816 17:54:54.977367   51774 start.go:83] releasing machines lock for "kubernetes-upgrade-108715", held for 21.635258816s
	I0816 17:54:54.977399   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:54.977730   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetIP
	I0816 17:54:54.980509   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.980886   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.980921   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.981048   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:54.981542   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:54.981683   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .DriverName
	I0816 17:54:54.981760   51774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:54:54.981790   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.981845   51774 ssh_runner.go:195] Run: cat /version.json
	I0816 17:54:54.981860   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHHostname
	I0816 17:54:54.984345   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.984547   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.984731   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.984758   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.984886   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:54.984901   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:54.984927   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:54.985070   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.985077   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHPort
	I0816 17:54:54.985250   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHKeyPath
	I0816 17:54:54.985250   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:54.985403   51774 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/id_rsa Username:docker}
	I0816 17:54:54.985451   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetSSHUsername
	I0816 17:54:54.985576   51774 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/kubernetes-upgrade-108715/id_rsa Username:docker}
	I0816 17:54:55.102192   51774 ssh_runner.go:195] Run: systemctl --version
	I0816 17:54:55.108242   51774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 17:54:55.267231   51774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 17:54:55.272739   51774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 17:54:55.272822   51774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:54:55.287528   51774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 17:54:55.287550   51774 start.go:495] detecting cgroup driver to use...
	I0816 17:54:55.287621   51774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 17:54:55.303677   51774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 17:54:55.316594   51774 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:54:55.316675   51774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:54:55.328985   51774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:54:55.341048   51774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:54:55.463183   51774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:54:55.627407   51774 docker.go:233] disabling docker service ...
	I0816 17:54:55.627485   51774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:54:55.641949   51774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:54:55.660420   51774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:54:55.814281   51774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:54:55.927247   51774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:54:55.940155   51774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:54:55.959213   51774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 17:54:55.959270   51774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:54:55.969816   51774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 17:54:55.969887   51774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:54:55.980294   51774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:54:55.991696   51774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 17:54:56.002718   51774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:54:56.012278   51774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:54:56.020733   51774 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 17:54:56.020794   51774 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 17:54:56.032462   51774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:54:56.041006   51774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:54:56.159356   51774 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 17:54:56.310977   51774 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 17:54:56.311041   51774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 17:54:56.316356   51774 start.go:563] Will wait 60s for crictl version
	I0816 17:54:56.316404   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:54:56.320028   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:54:56.362109   51774 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 17:54:56.362206   51774 ssh_runner.go:195] Run: crio --version
	I0816 17:54:56.393375   51774 ssh_runner.go:195] Run: crio --version
	I0816 17:54:56.429980   51774 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 17:54:56.431319   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetIP
	I0816 17:54:56.434486   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:56.434924   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:54:47 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 17:54:56.434946   51774 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 17:54:56.435142   51774 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 17:54:56.439082   51774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:54:56.450745   51774 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-108715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:54:56.450864   51774 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 17:54:56.450926   51774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:54:56.484152   51774 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 17:54:56.484226   51774 ssh_runner.go:195] Run: which lz4
	I0816 17:54:56.488068   51774 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 17:54:56.491859   51774 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 17:54:56.491886   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 17:54:58.015919   51774 crio.go:462] duration metric: took 1.527886638s to copy over tarball
	I0816 17:54:58.016008   51774 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 17:55:00.700165   51774 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.684128548s)
	I0816 17:55:00.700196   51774 crio.go:469] duration metric: took 2.684248006s to extract the tarball
	I0816 17:55:00.700205   51774 ssh_runner.go:146] rm: /preloaded.tar.lz4
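The preload path above copies a versioned image tarball into the guest and unpacks it straight into /var, so CRI-O starts with the expected images already in its store. A sketch of the same copy-and-extract sequence, assuming the tarball name from the log and a guest reachable as $NODE over SSH (minikube drives this through its own ssh_runner rather than plain scp/ssh):

	# push the cached preload tarball into the guest
	scp preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 root@$NODE:/preloaded.tar.lz4
	# extract under /var, keeping extended attributes (file capabilities), then clean up
	ssh root@$NODE 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'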
	I0816 17:55:00.742064   51774 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:55:00.785199   51774 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 17:55:00.785221   51774 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 17:55:00.785324   51774 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 17:55:00.785345   51774 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 17:55:00.785372   51774 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 17:55:00.785385   51774 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 17:55:00.785359   51774 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 17:55:00.785292   51774 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:55:00.785365   51774 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 17:55:00.785307   51774 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 17:55:00.786791   51774 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 17:55:00.786919   51774 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 17:55:00.786953   51774 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:55:00.787044   51774 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 17:55:00.787069   51774 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 17:55:00.787085   51774 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 17:55:00.787561   51774 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 17:55:00.788904   51774 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 17:55:01.020309   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 17:55:01.056610   51774 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 17:55:01.056665   51774 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 17:55:01.056706   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:55:01.059989   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 17:55:01.079892   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 17:55:01.084933   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 17:55:01.088028   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 17:55:01.089775   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 17:55:01.096899   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 17:55:01.110786   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 17:55:01.146191   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 17:55:01.213638   51774 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 17:55:01.213693   51774 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 17:55:01.213748   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:55:01.215644   51774 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 17:55:01.215675   51774 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 17:55:01.215717   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:55:01.237546   51774 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 17:55:01.237593   51774 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 17:55:01.237595   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 17:55:01.237640   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:55:01.268071   51774 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 17:55:01.268115   51774 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 17:55:01.268163   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:55:01.276434   51774 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 17:55:01.276477   51774 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 17:55:01.276523   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:55:01.291478   51774 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 17:55:01.291515   51774 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 17:55:01.291535   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 17:55:01.291554   51774 ssh_runner.go:195] Run: which crictl
	I0816 17:55:01.291622   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 17:55:01.321193   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 17:55:01.321193   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 17:55:01.321267   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 17:55:01.321323   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 17:55:01.323106   51774 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 17:55:01.377495   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 17:55:01.377602   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 17:55:01.420889   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 17:55:01.431549   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 17:55:01.457101   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 17:55:01.457227   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 17:55:01.465254   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 17:55:01.520929   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 17:55:01.521132   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 17:55:01.530700   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 17:55:01.593884   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 17:55:01.621985   51774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 17:55:01.621985   51774 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 17:55:01.622021   51774 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 17:55:01.634261   51774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:55:01.642609   51774 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 17:55:01.646589   51774 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 17:55:01.673925   51774 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 17:55:01.687771   51774 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 17:55:01.801372   51774 cache_images.go:92] duration metric: took 1.016127806s to LoadCachedImages
	W0816 17:55:01.801472   51774 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
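The cached-image fallback fails here simply because the per-image archive it stats (kube-controller-manager_v1.20.0) does not exist on the host; the run then continues with whatever the preload provided. A quick way to see what the local cache actually holds, assuming the paths from the log:

	# list the per-image archives minikube would try to load
	ls -la /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/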
	I0816 17:55:01.801488   51774 kubeadm.go:934] updating node { 192.168.39.8 8443 v1.20.0 crio true true} ...
	I0816 17:55:01.801628   51774 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-108715 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
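The kubelet command line above is installed as a systemd drop-in (written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= clears the unit's default before the minikube-specific invocation is set. A trimmed drop-in illustrating the same override pattern, keeping only a couple of the flags from the log:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	# then pick up the drop-in and start the unit:
	# sudo systemctl daemon-reload && sudo systemctl start kubelet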
	I0816 17:55:01.801729   51774 ssh_runner.go:195] Run: crio config
	I0816 17:55:01.847696   51774 cni.go:84] Creating CNI manager for ""
	I0816 17:55:01.847716   51774 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 17:55:01.847726   51774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:55:01.847744   51774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-108715 NodeName:kubernetes-upgrade-108715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 17:55:01.847897   51774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-108715"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
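A generated config in this shape can be sanity-checked without touching node state by running kubeadm against it in dry-run mode; a sketch, assuming the file path and binary location used in this run:

	# print what kubeadm would do for this config without writing manifests or starting components
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run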
	I0816 17:55:01.847955   51774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 17:55:01.857702   51774 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:55:01.857772   51774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 17:55:01.867058   51774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0816 17:55:01.882941   51774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:55:01.899144   51774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0816 17:55:01.914977   51774 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0816 17:55:01.918564   51774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:55:01.929786   51774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:55:02.048738   51774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:55:02.064042   51774 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715 for IP: 192.168.39.8
	I0816 17:55:02.064064   51774 certs.go:194] generating shared ca certs ...
	I0816 17:55:02.064084   51774 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:55:02.064260   51774 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 17:55:02.064331   51774 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 17:55:02.064344   51774 certs.go:256] generating profile certs ...
	I0816 17:55:02.064416   51774 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/client.key
	I0816 17:55:02.064433   51774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/client.crt with IP's: []
	I0816 17:55:02.250260   51774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/client.crt ...
	I0816 17:55:02.250287   51774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/client.crt: {Name:mke8db88bbaef6ca3a0730c8829abeaa18ab1496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:55:02.250470   51774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/client.key ...
	I0816 17:55:02.250488   51774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/client.key: {Name:mk892645119c707f0f2ecc940f492daef38e19e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:55:02.250604   51774 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key.34d3d230
	I0816 17:55:02.250629   51774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.crt.34d3d230 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.8]
	I0816 17:55:02.305549   51774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.crt.34d3d230 ...
	I0816 17:55:02.305575   51774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.crt.34d3d230: {Name:mkcfc54d46e516aaf66a07d1ad90033db5b7eb31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:55:02.305744   51774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key.34d3d230 ...
	I0816 17:55:02.305761   51774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key.34d3d230: {Name:mk060d0850a2a794a1ae84d95bb2c3fc729bad6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:55:02.305854   51774 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.crt.34d3d230 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.crt
	I0816 17:55:02.305949   51774 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key.34d3d230 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key
	I0816 17:55:02.306014   51774 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.key
	I0816 17:55:02.306030   51774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.crt with IP's: []
	I0816 17:55:02.457605   51774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.crt ...
	I0816 17:55:02.457634   51774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.crt: {Name:mke33dc552cb1088ecb5068f961f5031de86d878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:55:02.457805   51774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.key ...
	I0816 17:55:02.457820   51774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.key: {Name:mk49de7addf590884386067985e894a85507ccb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:55:02.458030   51774 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 17:55:02.458111   51774 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 17:55:02.458125   51774 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:55:02.458150   51774 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 17:55:02.458173   51774 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:55:02.458193   51774 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 17:55:02.458233   51774 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 17:55:02.458866   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:55:02.482685   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:55:02.504193   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:55:02.525435   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:55:02.546801   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 17:55:02.568668   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 17:55:02.590933   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:55:02.612734   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 17:55:02.634318   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:55:02.657852   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 17:55:02.678866   51774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 17:55:02.700141   51774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:55:02.715913   51774 ssh_runner.go:195] Run: openssl version
	I0816 17:55:02.721506   51774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 17:55:02.731521   51774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 17:55:02.735724   51774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 17:55:02.735790   51774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 17:55:02.741154   51774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 17:55:02.750901   51774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 17:55:02.760745   51774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 17:55:02.764885   51774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 17:55:02.764939   51774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 17:55:02.770072   51774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 17:55:02.779861   51774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:55:02.789521   51774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:55:02.793785   51774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:55:02.793841   51774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:55:02.799639   51774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
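Each test -L / ln -fs pair above publishes a certificate under /etc/ssl/certs using OpenSSL's subject-hash naming (<hash>.0), which is how the system trust store resolves CAs. The same convention, sketched for a hypothetical cert.pem:

	# compute the subject hash and link the cert under the name OpenSSL looks up
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
	sudo ln -fs /usr/share/ca-certificates/cert.pem /etc/ssl/certs/${HASH}.0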
	I0816 17:55:02.809728   51774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:55:02.813433   51774 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:55:02.813496   51774 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-108715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:55:02.813599   51774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 17:55:02.813667   51774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:55:02.848399   51774 cri.go:89] found id: ""
	I0816 17:55:02.848466   51774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 17:55:02.860374   51774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 17:55:02.869507   51774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 17:55:02.880908   51774 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 17:55:02.880926   51774 kubeadm.go:157] found existing configuration files:
	
	I0816 17:55:02.880966   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 17:55:02.889932   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 17:55:02.890007   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 17:55:02.900971   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 17:55:02.909538   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 17:55:02.909599   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 17:55:02.923140   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 17:55:02.935822   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 17:55:02.935883   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 17:55:02.948782   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 17:55:02.961163   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 17:55:02.961235   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 17:55:02.970457   51774 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 17:55:03.071847   51774 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 17:55:03.075647   51774 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 17:55:03.212544   51774 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 17:55:03.212708   51774 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 17:55:03.212861   51774 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 17:55:03.385880   51774 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 17:55:03.440381   51774 out.go:235]   - Generating certificates and keys ...
	I0816 17:55:03.440506   51774 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 17:55:03.440603   51774 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 17:55:03.516408   51774 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 17:55:03.809051   51774 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 17:55:03.908862   51774 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 17:55:03.966761   51774 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 17:55:04.380105   51774 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 17:55:04.380342   51774 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-108715 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0816 17:55:04.582767   51774 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 17:55:04.582986   51774 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-108715 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0816 17:55:04.828832   51774 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 17:55:04.953372   51774 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 17:55:05.169658   51774 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 17:55:05.169769   51774 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 17:55:05.257182   51774 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 17:55:05.418001   51774 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 17:55:05.638941   51774 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 17:55:05.826832   51774 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 17:55:05.841855   51774 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 17:55:05.843839   51774 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 17:55:05.843939   51774 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 17:55:05.967632   51774 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 17:55:05.970727   51774 out.go:235]   - Booting up control plane ...
	I0816 17:55:05.970867   51774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 17:55:05.974593   51774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 17:55:05.982916   51774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 17:55:05.983975   51774 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 17:55:05.991282   51774 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 17:55:45.984374   51774 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 17:55:45.985005   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:55:45.985302   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:55:50.986018   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:55:50.986368   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:56:00.985081   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:56:00.985267   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:56:20.984412   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:56:20.984699   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:57:00.986327   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:57:00.986914   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:57:00.986943   51774 kubeadm.go:310] 
	I0816 17:57:00.987032   51774 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 17:57:00.987128   51774 kubeadm.go:310] 		timed out waiting for the condition
	I0816 17:57:00.987138   51774 kubeadm.go:310] 
	I0816 17:57:00.987241   51774 kubeadm.go:310] 	This error is likely caused by:
	I0816 17:57:00.987343   51774 kubeadm.go:310] 		- The kubelet is not running
	I0816 17:57:00.987582   51774 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 17:57:00.987591   51774 kubeadm.go:310] 
	I0816 17:57:00.987822   51774 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 17:57:00.987916   51774 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 17:57:00.987980   51774 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 17:57:00.987991   51774 kubeadm.go:310] 
	I0816 17:57:00.988203   51774 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 17:57:00.988363   51774 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 17:57:00.988380   51774 kubeadm.go:310] 
	I0816 17:57:00.988655   51774 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 17:57:00.988874   51774 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 17:57:00.989051   51774 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 17:57:00.989354   51774 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 17:57:00.989384   51774 kubeadm.go:310] 
	I0816 17:57:00.989944   51774 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 17:57:00.990064   51774 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 17:57:00.990160   51774 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 17:57:00.990298   51774 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-108715 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-108715 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-108715 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-108715 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
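When wait-control-plane times out like this, minikube resets the node (next line) and retries the init; before the retry, the state of the kubelet and of any control-plane containers can be checked with the commands kubeadm itself suggests above. A sketch, assuming a shell on the node:

	# did the kubelet ever come up, and what did it log?
	systemctl status kubelet
	journalctl -xeu kubelet
	# which control-plane containers, if any, did CRI-O start?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause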
	I0816 17:57:00.990341   51774 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 17:57:01.455775   51774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:57:01.468980   51774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 17:57:01.477770   51774 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 17:57:01.477796   51774 kubeadm.go:157] found existing configuration files:
	
	I0816 17:57:01.477840   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 17:57:01.485799   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 17:57:01.485843   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 17:57:01.493910   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 17:57:01.501738   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 17:57:01.501793   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 17:57:01.510817   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 17:57:01.522979   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 17:57:01.523022   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 17:57:01.535913   51774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 17:57:01.547443   51774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 17:57:01.547510   51774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 17:57:01.558647   51774 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 17:57:01.757728   51774 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 17:58:57.640168   51774 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 17:58:57.640279   51774 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 17:58:57.642107   51774 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 17:58:57.642169   51774 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 17:58:57.642248   51774 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 17:58:57.642385   51774 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 17:58:57.642530   51774 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 17:58:57.642617   51774 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 17:58:57.644329   51774 out.go:235]   - Generating certificates and keys ...
	I0816 17:58:57.644429   51774 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 17:58:57.644523   51774 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 17:58:57.644652   51774 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 17:58:57.644747   51774 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 17:58:57.644855   51774 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 17:58:57.644930   51774 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 17:58:57.644998   51774 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 17:58:57.645081   51774 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 17:58:57.645150   51774 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 17:58:57.645236   51774 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 17:58:57.645290   51774 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 17:58:57.645358   51774 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 17:58:57.645420   51774 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 17:58:57.645484   51774 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 17:58:57.645567   51774 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 17:58:57.645651   51774 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 17:58:57.645787   51774 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 17:58:57.645898   51774 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 17:58:57.645955   51774 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 17:58:57.646043   51774 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 17:58:57.647441   51774 out.go:235]   - Booting up control plane ...
	I0816 17:58:57.647562   51774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 17:58:57.647662   51774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 17:58:57.647753   51774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 17:58:57.647853   51774 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 17:58:57.648021   51774 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 17:58:57.648091   51774 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 17:58:57.648177   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:58:57.648385   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:58:57.648479   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:58:57.648707   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:58:57.648798   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:58:57.648968   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:58:57.649050   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:58:57.649224   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:58:57.649278   51774 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 17:58:57.649419   51774 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 17:58:57.649426   51774 kubeadm.go:310] 
	I0816 17:58:57.649457   51774 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 17:58:57.649489   51774 kubeadm.go:310] 		timed out waiting for the condition
	I0816 17:58:57.649495   51774 kubeadm.go:310] 
	I0816 17:58:57.649522   51774 kubeadm.go:310] 	This error is likely caused by:
	I0816 17:58:57.649549   51774 kubeadm.go:310] 		- The kubelet is not running
	I0816 17:58:57.649631   51774 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 17:58:57.649637   51774 kubeadm.go:310] 
	I0816 17:58:57.649715   51774 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 17:58:57.649742   51774 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 17:58:57.649769   51774 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 17:58:57.649775   51774 kubeadm.go:310] 
	I0816 17:58:57.649851   51774 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 17:58:57.649914   51774 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 17:58:57.649920   51774 kubeadm.go:310] 
	I0816 17:58:57.650008   51774 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 17:58:57.650075   51774 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 17:58:57.650133   51774 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 17:58:57.650190   51774 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 17:58:57.650254   51774 kubeadm.go:394] duration metric: took 3m54.836764418s to StartCluster
	I0816 17:58:57.650298   51774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 17:58:57.650385   51774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 17:58:57.650463   51774 kubeadm.go:310] 
	I0816 17:58:57.725480   51774 cri.go:89] found id: ""
	I0816 17:58:57.725520   51774 logs.go:276] 0 containers: []
	W0816 17:58:57.725532   51774 logs.go:278] No container was found matching "kube-apiserver"
	I0816 17:58:57.725539   51774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 17:58:57.725603   51774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 17:58:57.763914   51774 cri.go:89] found id: ""
	I0816 17:58:57.763944   51774 logs.go:276] 0 containers: []
	W0816 17:58:57.763955   51774 logs.go:278] No container was found matching "etcd"
	I0816 17:58:57.763964   51774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 17:58:57.764031   51774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 17:58:57.797068   51774 cri.go:89] found id: ""
	I0816 17:58:57.797096   51774 logs.go:276] 0 containers: []
	W0816 17:58:57.797104   51774 logs.go:278] No container was found matching "coredns"
	I0816 17:58:57.797110   51774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 17:58:57.797164   51774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 17:58:57.835080   51774 cri.go:89] found id: ""
	I0816 17:58:57.835110   51774 logs.go:276] 0 containers: []
	W0816 17:58:57.835121   51774 logs.go:278] No container was found matching "kube-scheduler"
	I0816 17:58:57.835129   51774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 17:58:57.835208   51774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 17:58:57.874245   51774 cri.go:89] found id: ""
	I0816 17:58:57.874279   51774 logs.go:276] 0 containers: []
	W0816 17:58:57.874287   51774 logs.go:278] No container was found matching "kube-proxy"
	I0816 17:58:57.874297   51774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 17:58:57.874369   51774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 17:58:57.919589   51774 cri.go:89] found id: ""
	I0816 17:58:57.919616   51774 logs.go:276] 0 containers: []
	W0816 17:58:57.919636   51774 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 17:58:57.919645   51774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 17:58:57.919704   51774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 17:58:57.963522   51774 cri.go:89] found id: ""
	I0816 17:58:57.963550   51774 logs.go:276] 0 containers: []
	W0816 17:58:57.963560   51774 logs.go:278] No container was found matching "kindnet"
	I0816 17:58:57.963569   51774 logs.go:123] Gathering logs for describe nodes ...
	I0816 17:58:57.963585   51774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 17:58:58.093225   51774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 17:58:58.093253   51774 logs.go:123] Gathering logs for CRI-O ...
	I0816 17:58:58.093269   51774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 17:58:58.206075   51774 logs.go:123] Gathering logs for container status ...
	I0816 17:58:58.206118   51774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 17:58:58.246544   51774 logs.go:123] Gathering logs for kubelet ...
	I0816 17:58:58.246578   51774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 17:58:58.323855   51774 logs.go:123] Gathering logs for dmesg ...
	I0816 17:58:58.323893   51774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0816 17:58:58.356224   51774 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 17:58:58.356345   51774 out.go:270] * 
	* 
	W0816 17:58:58.356605   51774 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 17:58:58.356647   51774 out.go:270] * 
	* 
	W0816 17:58:58.357724   51774 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 17:58:58.360721   51774 out.go:201] 
	W0816 17:58:58.361745   51774 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 17:58:58.361784   51774 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 17:58:58.361810   51774 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 17:58:58.363230   51774 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-108715
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-108715: (6.294712025s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-108715 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-108715 status --format={{.Host}}: exit status 7 (73.70048ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.668156046s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-108715 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.441695ms)

-- stdout --
	* [kubernetes-upgrade-108715] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-108715
	    minikube start -p kubernetes-upgrade-108715 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1087152 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-108715 --kubernetes-version=v1.31.0
	    
	
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-108715 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m11.499799369s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-16 18:02:05.112296255 +0000 UTC m=+4425.430841324
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-108715 -n kubernetes-upgrade-108715
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-108715 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-108715 logs -n 25: (4.177052847s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |   Profile   |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-791304 sudo ip r s                           | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:01 UTC | 16 Aug 24 18:01 UTC |
	| ssh     | -p auto-791304 sudo                                  | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:01 UTC | 16 Aug 24 18:01 UTC |
	|         | iptables-save                                        |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo iptables                         | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:01 UTC | 16 Aug 24 18:01 UTC |
	|         | -t nat -L -n -v                                      |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:01 UTC | 16 Aug 24 18:02 UTC |
	|         | status kubelet --all --full                          |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | cat kubelet --no-pager                               |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo journalctl                       | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | -xeu kubelet --all --full                            |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo cat                              | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo cat                              | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | /var/lib/kubelet/config.yaml                         |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC |                     |
	|         | status docker --all --full                           |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | cat docker --no-pager                                |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo cat                              | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | /etc/docker/daemon.json                              |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo docker                           | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC |                     |
	|         | system info                                          |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC |                     |
	|         | status cri-docker --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | cat cri-docker --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo cat                              | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo cat                              | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo                                  | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | cri-dockerd --version                                |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC |                     |
	|         | status containerd --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | cat containerd --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo cat                              | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | /lib/systemd/system/containerd.service               |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo cat                              | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | /etc/containerd/config.toml                          |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo containerd                       | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | config dump                                          |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | status crio --all --full                             |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo systemctl                        | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC | 16 Aug 24 18:02 UTC |
	|         | cat crio --no-pager                                  |             |         |         |                     |                     |
	| ssh     | -p auto-791304 sudo find                             | auto-791304 | jenkins | v1.33.1 | 16 Aug 24 18:02 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |             |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |             |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:01:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:01:40.328327   60592 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:01:40.328423   60592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:01:40.328435   60592 out.go:358] Setting ErrFile to fd 2...
	I0816 18:01:40.328439   60592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:01:40.328594   60592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:01:40.329221   60592 out.go:352] Setting JSON to false
	I0816 18:01:40.330131   60592 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6198,"bootTime":1723825102,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:01:40.330187   60592 start.go:139] virtualization: kvm guest
	I0816 18:01:40.332237   60592 out.go:177] * [enable-default-cni-791304] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:01:40.333387   60592 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:01:40.333384   60592 notify.go:220] Checking for updates...
	I0816 18:01:40.334514   60592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:01:40.335663   60592 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:01:40.336966   60592 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:01:40.338252   60592 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:01:40.339356   60592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:01:40.341096   60592 config.go:182] Loaded profile config "auto-791304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:01:40.341254   60592 config.go:182] Loaded profile config "flannel-791304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:01:40.341373   60592 config.go:182] Loaded profile config "kubernetes-upgrade-108715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:01:40.341499   60592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:01:40.378546   60592 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 18:01:40.379829   60592 start.go:297] selected driver: kvm2
	I0816 18:01:40.379850   60592 start.go:901] validating driver "kvm2" against <nil>
	I0816 18:01:40.379864   60592 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:01:40.380509   60592 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:01:40.380581   60592 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:01:40.395252   60592 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:01:40.395297   60592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0816 18:01:40.395510   60592 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0816 18:01:40.395552   60592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:01:40.395630   60592 cni.go:84] Creating CNI manager for "bridge"
	I0816 18:01:40.395650   60592 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 18:01:40.395727   60592 start.go:340] cluster config:
	{Name:enable-default-cni-791304 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-791304 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:01:40.395842   60592 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:01:40.397672   60592 out.go:177] * Starting "enable-default-cni-791304" primary control-plane node in "enable-default-cni-791304" cluster
	I0816 18:01:42.084190   58802 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.532698179s)
	I0816 18:01:42.084228   58802 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:01:42.084287   58802 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:01:42.090025   58802 start.go:563] Will wait 60s for crictl version
	I0816 18:01:42.090079   58802 ssh_runner.go:195] Run: which crictl
	I0816 18:01:42.093757   58802 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:01:42.137197   58802 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:01:42.137312   58802 ssh_runner.go:195] Run: crio --version
	I0816 18:01:42.167554   58802 ssh_runner.go:195] Run: crio --version
	I0816 18:01:42.198002   58802 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:01:42.199313   58802 main.go:141] libmachine: (kubernetes-upgrade-108715) Calling .GetIP
	I0816 18:01:42.202157   58802 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 18:01:42.202468   58802 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:ca:e7", ip: ""} in network mk-kubernetes-upgrade-108715: {Iface:virbr1 ExpiryTime:2024-08-16 18:59:22 +0000 UTC Type:0 Mac:52:54:00:73:ca:e7 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:kubernetes-upgrade-108715 Clientid:01:52:54:00:73:ca:e7}
	I0816 18:01:42.202495   58802 main.go:141] libmachine: (kubernetes-upgrade-108715) DBG | domain kubernetes-upgrade-108715 has defined IP address 192.168.39.8 and MAC address 52:54:00:73:ca:e7 in network mk-kubernetes-upgrade-108715
	I0816 18:01:42.202651   58802 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:01:42.206707   58802 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-108715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:01:42.206808   58802 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:01:42.206857   58802 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:01:42.247184   58802 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:01:42.247205   58802 crio.go:433] Images already preloaded, skipping extraction
	I0816 18:01:42.247253   58802 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:01:42.281420   58802 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:01:42.281445   58802 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:01:42.281455   58802 kubeadm.go:934] updating node { 192.168.39.8 8443 v1.31.0 crio true true} ...
	I0816 18:01:42.281579   58802 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-108715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:01:42.281669   58802 ssh_runner.go:195] Run: crio config
	I0816 18:01:42.331484   58802 cni.go:84] Creating CNI manager for ""
	I0816 18:01:42.331519   58802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:01:42.331534   58802 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:01:42.331562   58802 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-108715 NodeName:kubernetes-upgrade-108715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:01:42.331759   58802 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-108715"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:01:42.331830   58802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:01:42.341155   58802 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:01:42.341214   58802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:01:42.350420   58802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0816 18:01:42.366058   58802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:01:42.382073   58802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0816 18:01:42.397772   58802 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0816 18:01:42.401404   58802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:01:42.539540   58802 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:01:42.553488   58802 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715 for IP: 192.168.39.8
	I0816 18:01:42.553515   58802 certs.go:194] generating shared ca certs ...
	I0816 18:01:42.553538   58802 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:01:42.553701   58802 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:01:42.553739   58802 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:01:42.553751   58802 certs.go:256] generating profile certs ...
	I0816 18:01:42.553861   58802 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/client.key
	I0816 18:01:42.553915   58802 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key.34d3d230
	I0816 18:01:42.553947   58802 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.key
	I0816 18:01:42.554054   58802 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:01:42.554082   58802 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:01:42.554091   58802 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:01:42.554114   58802 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:01:42.554179   58802 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:01:42.554207   58802 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:01:42.554244   58802 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:01:42.554876   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:01:42.579509   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:01:42.602510   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:01:42.625526   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:01:42.649039   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 18:01:42.672674   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:01:42.695554   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:01:42.717894   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kubernetes-upgrade-108715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:01:42.745058   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:01:42.767410   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:01:42.790210   58802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:01:42.811857   58802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:01:42.828452   58802 ssh_runner.go:195] Run: openssl version
	I0816 18:01:42.834113   58802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:01:42.844175   58802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:01:42.848527   58802 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:01:42.848588   58802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:01:42.853957   58802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:01:42.862834   58802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:01:42.874189   58802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:01:42.878665   58802 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:01:42.878727   58802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:01:42.884086   58802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:01:42.893242   58802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:01:42.955980   58802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:01:42.966218   58802 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:01:42.966284   58802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:01:42.975205   58802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:01:42.990688   58802 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:01:43.020669   58802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:01:43.028708   58802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:01:43.102348   58802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:01:43.137760   58802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:01:43.199935   58802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:01:43.250870   58802 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:01:43.292765   58802 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-108715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-108715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:01:43.292861   58802 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:01:43.292930   58802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:01:43.589517   58802 cri.go:89] found id: "cafaf7419f0fb7efb44d569991e2fb7a1178718f3973dcefafb57dfa0ee8d8eb"
	I0816 18:01:43.589544   58802 cri.go:89] found id: "0cc24755ca5cdf32a6ecb6e38ae2fc3106fda5204381d98b93ec93c1f0a20a6b"
	I0816 18:01:43.589549   58802 cri.go:89] found id: "957a3c579dac37732f7de755f90b40112c7bce95e55f8298dc1488d34d3831c8"
	I0816 18:01:43.589554   58802 cri.go:89] found id: "156e8fdaff0eb949662fd30d22d4e37859141fca03af9bbf8ee9db240c9346c2"
	I0816 18:01:43.589557   58802 cri.go:89] found id: "eb7cc17ca5aa883c98668c836a2f272bd68eb146cecae232b77dc1fe0449bc51"
	I0816 18:01:43.589562   58802 cri.go:89] found id: "55a5a0877e414e2242352bd767eb8a96bce24a2fb92c62a8c22b249a1d8ab755"
	I0816 18:01:43.589565   58802 cri.go:89] found id: "198159a79191b464c61b82ed4a8fbd7fee0116fe0c08d35775507dcabced5348"
	I0816 18:01:43.589569   58802 cri.go:89] found id: "3f14c97969c4500d7aa82d5af44a5f09c51bfab8d851f77c6a86f488907b4ba8"
	I0816 18:01:43.589574   58802 cri.go:89] found id: "9aaace5fc8ca7e0cd8106b44224d40b8013596cb9d78abd970645fdf855ebea7"
	I0816 18:01:43.589582   58802 cri.go:89] found id: "ca0c3f0020974e33768ea2c94049e982998dce891c8d2611033a35ac152711ab"
	I0816 18:01:43.589587   58802 cri.go:89] found id: "703a00a87f077f0a1c23bbab72df6c39004b940867d830e54241154d256efe88"
	I0816 18:01:43.589590   58802 cri.go:89] found id: "f36f3ad550a6ac80380dd6d03de8cf4a660c9ed16a440ec638bbca3fa94f114e"
	I0816 18:01:43.589595   58802 cri.go:89] found id: "923c2275e3a2ae3b7730c877733a3299e6c9a2987b78186de2e00cbda9c5e9ca"
	I0816 18:01:43.589599   58802 cri.go:89] found id: "49b8c30eb365b12dfff2f7d3ae2c1e74b66cedcdea10e87cb7169023fadd0e7b"
	I0816 18:01:43.589604   58802 cri.go:89] found id: "e1f54f78b9706bbdbeda583cae342c44105f013f227f9b9508c2df50fd4f02da"
	I0816 18:01:43.589610   58802 cri.go:89] found id: ""
	I0816 18:01:43.589658   58802 ssh_runner.go:195] Run: sudo runc list -f json
	I0816 18:01:40.478180   60323 main.go:141] libmachine: (flannel-791304) DBG | domain flannel-791304 has defined MAC address 52:54:00:a8:e0:92 in network mk-flannel-791304
	I0816 18:01:40.478662   60323 main.go:141] libmachine: (flannel-791304) DBG | unable to find current IP address of domain flannel-791304 in network mk-flannel-791304
	I0816 18:01:40.478689   60323 main.go:141] libmachine: (flannel-791304) DBG | I0816 18:01:40.478602   60346 retry.go:31] will retry after 1.263934992s: waiting for machine to come up
	I0816 18:01:41.744212   60323 main.go:141] libmachine: (flannel-791304) DBG | domain flannel-791304 has defined MAC address 52:54:00:a8:e0:92 in network mk-flannel-791304
	I0816 18:01:41.744885   60323 main.go:141] libmachine: (flannel-791304) DBG | unable to find current IP address of domain flannel-791304 in network mk-flannel-791304
	I0816 18:01:41.744909   60323 main.go:141] libmachine: (flannel-791304) DBG | I0816 18:01:41.744848   60346 retry.go:31] will retry after 1.121763322s: waiting for machine to come up
	I0816 18:01:42.868092   60323 main.go:141] libmachine: (flannel-791304) DBG | domain flannel-791304 has defined MAC address 52:54:00:a8:e0:92 in network mk-flannel-791304
	I0816 18:01:42.868540   60323 main.go:141] libmachine: (flannel-791304) DBG | unable to find current IP address of domain flannel-791304 in network mk-flannel-791304
	I0816 18:01:42.868568   60323 main.go:141] libmachine: (flannel-791304) DBG | I0816 18:01:42.868521   60346 retry.go:31] will retry after 1.744667154s: waiting for machine to come up
	I0816 18:01:44.615078   60323 main.go:141] libmachine: (flannel-791304) DBG | domain flannel-791304 has defined MAC address 52:54:00:a8:e0:92 in network mk-flannel-791304
	I0816 18:01:44.615633   60323 main.go:141] libmachine: (flannel-791304) DBG | unable to find current IP address of domain flannel-791304 in network mk-flannel-791304
	I0816 18:01:44.615679   60323 main.go:141] libmachine: (flannel-791304) DBG | I0816 18:01:44.615589   60346 retry.go:31] will retry after 2.59784451s: waiting for machine to come up
	I0816 18:01:40.398936   60592 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:01:40.398978   60592 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:01:40.398989   60592 cache.go:56] Caching tarball of preloaded images
	I0816 18:01:40.399076   60592 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:01:40.399086   60592 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 18:01:40.399170   60592 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/config.json ...
	I0816 18:01:40.399186   60592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/config.json: {Name:mkd84c4b39f4cb128e71bb2f3bb5cfe1befd0ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:01:40.399315   60592 start.go:360] acquireMachinesLock for enable-default-cni-791304: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> CRI-O <==
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.928433396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831325928397271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b5f686d-17c1-4aae-9949-5c9359ef9009 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.929091174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bed484b8-5c88-4482-8b7a-b6a9d466f8e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.929185641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bed484b8-5c88-4482-8b7a-b6a9d466f8e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.933713366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac5865a7ccda1fb0d4c1b62acbc51b63233da8b77fd8f51d0c4ad87857a063c8,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723831321598789349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f7fe79dcaa0f4b01d8576be0730e3c2094d4724edba621b30da7ac69ea72cd,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723831317825212717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abd62ff40fcd3dad64d75571c156bf700f0d14a0b4d9dcb238040aed5cc1ec5,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723831317795473597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75be135429b71872d4de9e3f79c20aae7e00b4abca349adf7bb18a80e3f2a12f,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723831317797129594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0628dfe7012997607c2178a37c68b33ce18f8b0f3ae6bcfa0b4f05d0ffc55a7c,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723831317776244339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cfa7560ba283dccfb53d0de54d06ba0a1d224451e411735c445fcc4c731c24,PodSandboxId:e79f93c442788a9831b9f5b2c97e6d9d15c756f9fccbb41e562e7cab65a4c7f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304296480060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48c3c9527f0a0fc293332069e08a59a3a9ac33c943ef3a49bc252af15fe99f8,PodSandboxId:0df0bb7cc9c394439be6dd515e379ade42bf4e50e6406fc56d960e75b87df442,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304167231501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c54c77f938aa573092520fa0da1f3d2980321e3ea8ae262748822ce6ff74090,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723831303439375963,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d24e14effae3981acc9e61f23098d55d0991557aa70e6ac9740dfd061d7f1b,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723831303419716430,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ac786f4c592b0c0736ca6627d4f0c4e8150fad80e489ded6bd8fddc2f46e92,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723831303364231722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57a8cfc44d6c39887f29246c25f1e62eb8f7fb2d85abdacab38dcb3232abc0e,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723831303462420611,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a629a4bdefd15488bf20c9c30b78ec3aec32194d5524df19d8e683eb7a2c8d7d,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723831303336096666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc24755ca5cdf32a6ecb6e38ae2fc3106fda5204381d98b93ec93c1f0a20a6b,PodSandboxId:0d68c6972897bfe88e02cff44b8f15f051d0f2c1cc96718df7bb4ccd1e6ee73b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210724111939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafaf7419f0fb7efb44d569991e2fb7a1178718f3973dcefafb57dfa0ee8d8eb,PodSandboxId:b5760ee5e38e208c857e7705ece076f3379cced8443149fd6dd2ee0fe3792ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a
7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210863707684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0c3f0020974e33768ea2c94049e982998dce891c8d2611033a35ac152711ab,PodSandboxId:995aae1bf3ead313aead339868fb19d1a23a99b6d01fab62fc02bc938226a1de,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723831194923685206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e0bb00-d314-447b-96d7-1347e3b0b636,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bed484b8-5c88-4482-8b7a-b6a9d466f8e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.991006052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54e163b6-8600-4fa6-9195-7423ece55d70 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.991084848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54e163b6-8600-4fa6-9195-7423ece55d70 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.992440235Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edec659e-e4b2-4c46-b5c3-0ba6cc724d27 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.994055089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831325994010917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edec659e-e4b2-4c46-b5c3-0ba6cc724d27 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.995406764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0817214c-ebb5-4f75-8f03-355e587a0b06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.995605014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0817214c-ebb5-4f75-8f03-355e587a0b06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:05 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:05.997172561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac5865a7ccda1fb0d4c1b62acbc51b63233da8b77fd8f51d0c4ad87857a063c8,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723831321598789349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f7fe79dcaa0f4b01d8576be0730e3c2094d4724edba621b30da7ac69ea72cd,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723831317825212717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abd62ff40fcd3dad64d75571c156bf700f0d14a0b4d9dcb238040aed5cc1ec5,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723831317795473597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75be135429b71872d4de9e3f79c20aae7e00b4abca349adf7bb18a80e3f2a12f,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723831317797129594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0628dfe7012997607c2178a37c68b33ce18f8b0f3ae6bcfa0b4f05d0ffc55a7c,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723831317776244339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cfa7560ba283dccfb53d0de54d06ba0a1d224451e411735c445fcc4c731c24,PodSandboxId:e79f93c442788a9831b9f5b2c97e6d9d15c756f9fccbb41e562e7cab65a4c7f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304296480060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48c3c9527f0a0fc293332069e08a59a3a9ac33c943ef3a49bc252af15fe99f8,PodSandboxId:0df0bb7cc9c394439be6dd515e379ade42bf4e50e6406fc56d960e75b87df442,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304167231501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c54c77f938aa573092520fa0da1f3d2980321e3ea8ae262748822ce6ff74090,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723831303439375963,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d24e14effae3981acc9e61f23098d55d0991557aa70e6ac9740dfd061d7f1b,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723831303419716430,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ac786f4c592b0c0736ca6627d4f0c4e8150fad80e489ded6bd8fddc2f46e92,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723831303364231722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57a8cfc44d6c39887f29246c25f1e62eb8f7fb2d85abdacab38dcb3232abc0e,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723831303462420611,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a629a4bdefd15488bf20c9c30b78ec3aec32194d5524df19d8e683eb7a2c8d7d,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723831303336096666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc24755ca5cdf32a6ecb6e38ae2fc3106fda5204381d98b93ec93c1f0a20a6b,PodSandboxId:0d68c6972897bfe88e02cff44b8f15f051d0f2c1cc96718df7bb4ccd1e6ee73b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210724111939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafaf7419f0fb7efb44d569991e2fb7a1178718f3973dcefafb57dfa0ee8d8eb,PodSandboxId:b5760ee5e38e208c857e7705ece076f3379cced8443149fd6dd2ee0fe3792ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a
7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210863707684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0c3f0020974e33768ea2c94049e982998dce891c8d2611033a35ac152711ab,PodSandboxId:995aae1bf3ead313aead339868fb19d1a23a99b6d01fab62fc02bc938226a1de,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723831194923685206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e0bb00-d314-447b-96d7-1347e3b0b636,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0817214c-ebb5-4f75-8f03-355e587a0b06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.058308612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8756663-5135-4479-ad91-9b890abaf780 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.058446103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8756663-5135-4479-ad91-9b890abaf780 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.059692903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a44b3d5e-2a1f-4210-9ba5-cf083dd7638b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.060185429Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831326060160623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a44b3d5e-2a1f-4210-9ba5-cf083dd7638b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.060650204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9e2df3b-994c-43d2-b99f-127aca456100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.060724509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9e2df3b-994c-43d2-b99f-127aca456100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.061096037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac5865a7ccda1fb0d4c1b62acbc51b63233da8b77fd8f51d0c4ad87857a063c8,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723831321598789349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f7fe79dcaa0f4b01d8576be0730e3c2094d4724edba621b30da7ac69ea72cd,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723831317825212717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abd62ff40fcd3dad64d75571c156bf700f0d14a0b4d9dcb238040aed5cc1ec5,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723831317795473597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75be135429b71872d4de9e3f79c20aae7e00b4abca349adf7bb18a80e3f2a12f,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723831317797129594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0628dfe7012997607c2178a37c68b33ce18f8b0f3ae6bcfa0b4f05d0ffc55a7c,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723831317776244339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cfa7560ba283dccfb53d0de54d06ba0a1d224451e411735c445fcc4c731c24,PodSandboxId:e79f93c442788a9831b9f5b2c97e6d9d15c756f9fccbb41e562e7cab65a4c7f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304296480060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48c3c9527f0a0fc293332069e08a59a3a9ac33c943ef3a49bc252af15fe99f8,PodSandboxId:0df0bb7cc9c394439be6dd515e379ade42bf4e50e6406fc56d960e75b87df442,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304167231501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c54c77f938aa573092520fa0da1f3d2980321e3ea8ae262748822ce6ff74090,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723831303439375963,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d24e14effae3981acc9e61f23098d55d0991557aa70e6ac9740dfd061d7f1b,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723831303419716430,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ac786f4c592b0c0736ca6627d4f0c4e8150fad80e489ded6bd8fddc2f46e92,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723831303364231722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57a8cfc44d6c39887f29246c25f1e62eb8f7fb2d85abdacab38dcb3232abc0e,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723831303462420611,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a629a4bdefd15488bf20c9c30b78ec3aec32194d5524df19d8e683eb7a2c8d7d,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723831303336096666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc24755ca5cdf32a6ecb6e38ae2fc3106fda5204381d98b93ec93c1f0a20a6b,PodSandboxId:0d68c6972897bfe88e02cff44b8f15f051d0f2c1cc96718df7bb4ccd1e6ee73b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210724111939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafaf7419f0fb7efb44d569991e2fb7a1178718f3973dcefafb57dfa0ee8d8eb,PodSandboxId:b5760ee5e38e208c857e7705ece076f3379cced8443149fd6dd2ee0fe3792ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a
7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210863707684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0c3f0020974e33768ea2c94049e982998dce891c8d2611033a35ac152711ab,PodSandboxId:995aae1bf3ead313aead339868fb19d1a23a99b6d01fab62fc02bc938226a1de,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723831194923685206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e0bb00-d314-447b-96d7-1347e3b0b636,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9e2df3b-994c-43d2-b99f-127aca456100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.104059477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0874e285-f088-436e-a667-a02ec8d11163 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.104157695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0874e285-f088-436e-a667-a02ec8d11163 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.105442012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=541b32b9-6b9f-4f59-8cc6-2df38821c800 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.106129161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831326106096835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=541b32b9-6b9f-4f59-8cc6-2df38821c800 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.106880797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ef84d75-8085-4a8a-8c1f-d82f0c0ed1c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.106996996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ef84d75-8085-4a8a-8c1f-d82f0c0ed1c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:02:06 kubernetes-upgrade-108715 crio[3195]: time="2024-08-16 18:02:06.107392332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac5865a7ccda1fb0d4c1b62acbc51b63233da8b77fd8f51d0c4ad87857a063c8,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723831321598789349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f7fe79dcaa0f4b01d8576be0730e3c2094d4724edba621b30da7ac69ea72cd,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723831317825212717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abd62ff40fcd3dad64d75571c156bf700f0d14a0b4d9dcb238040aed5cc1ec5,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723831317795473597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75be135429b71872d4de9e3f79c20aae7e00b4abca349adf7bb18a80e3f2a12f,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723831317797129594,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0628dfe7012997607c2178a37c68b33ce18f8b0f3ae6bcfa0b4f05d0ffc55a7c,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723831317776244339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70cfa7560ba283dccfb53d0de54d06ba0a1d224451e411735c445fcc4c731c24,PodSandboxId:e79f93c442788a9831b9f5b2c97e6d9d15c756f9fccbb41e562e7cab65a4c7f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304296480060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f48c3c9527f0a0fc293332069e08a59a3a9ac33c943ef3a49bc252af15fe99f8,PodSandboxId:0df0bb7cc9c394439be6dd515e379ade42bf4e50e6406fc56d960e75b87df442,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723831304167231501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c54c77f938aa573092520fa0da1f3d2980321e3ea8ae262748822ce6ff74090,PodSandboxId:526336bb9313776abc86836817d227c9d2bad532ab14c6b61b8084f1a044091a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723831303439375963,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caafa5300a5ae5afb934215b5237f295,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d24e14effae3981acc9e61f23098d55d0991557aa70e6ac9740dfd061d7f1b,PodSandboxId:1d1841af3982e6d7a9f295de3696ee68c1ae6d43a88c046d4d24ef0cdbe28152,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723831303419716430,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a15379ca26dc1695f5599c7c16e10e11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ac786f4c592b0c0736ca6627d4f0c4e8150fad80e489ded6bd8fddc2f46e92,PodSandboxId:257600fc02536caac5a06d737f7dc868cd70e9d4795ec7371a041dec877d774e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723831303364231722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b6b00b0a3e07971c4fa30028e7175fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57a8cfc44d6c39887f29246c25f1e62eb8f7fb2d85abdacab38dcb3232abc0e,PodSandboxId:cb4e737d461f2d33a1916bdab27192335a2f8f3b7c6e3dc52dfd91d36326922f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723831303462420611,Labels:map[string]string{io.kubernetes
.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4kxgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24eb4c3-6178-4382-925a-b3ecd3f3679d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a629a4bdefd15488bf20c9c30b78ec3aec32194d5524df19d8e683eb7a2c8d7d,PodSandboxId:8c68bf8799f28ef1fd3201381357e771bd62fa4e2dfe1ee67ad91ef71e51f0e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723831303336096666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-108715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb25ab7aa21e0ae716149c92d4111837,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc24755ca5cdf32a6ecb6e38ae2fc3106fda5204381d98b93ec93c1f0a20a6b,PodSandboxId:0d68c6972897bfe88e02cff44b8f15f051d0f2c1cc96718df7bb4ccd1e6ee73b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210724111939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-6f6b679f8f-dtbhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc2af0d-f5d2-43bc-9aaf-f21d9170a235,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafaf7419f0fb7efb44d569991e2fb7a1178718f3973dcefafb57dfa0ee8d8eb,PodSandboxId:b5760ee5e38e208c857e7705ece076f3379cced8443149fd6dd2ee0fe3792ae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a
7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723831210863707684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gvksp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bab26b94-a5d2-4da3-8c0d-cb4e750501d6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0c3f0020974e33768ea2c94049e982998dce891c8d2611033a35ac152711ab,PodSandboxId:995aae1bf3ead313aead339868fb19d1a23a99b6d01fab62fc02bc938226a1de,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723831194923685206,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33e0bb00-d314-447b-96d7-1347e3b0b636,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ef84d75-8085-4a8a-8c1f-d82f0c0ed1c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ac5865a7ccda1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   4 seconds ago        Running             kube-proxy                3                   cb4e737d461f2       kube-proxy-4kxgb
	84f7fe79dcaa0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   8 seconds ago        Running             kube-scheduler            3                   8c68bf8799f28       kube-scheduler-kubernetes-upgrade-108715
	75be135429b71       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   8 seconds ago        Running             kube-controller-manager   3                   257600fc02536       kube-controller-manager-kubernetes-upgrade-108715
	5abd62ff40fcd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago        Running             etcd                      3                   526336bb93137       etcd-kubernetes-upgrade-108715
	0628dfe701299       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   8 seconds ago        Running             kube-apiserver            3                   1d1841af3982e       kube-apiserver-kubernetes-upgrade-108715
	70cfa7560ba28       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago       Running             coredns                   2                   e79f93c442788       coredns-6f6b679f8f-dtbhm
	f48c3c9527f0a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago       Running             coredns                   2                   0df0bb7cc9c39       coredns-6f6b679f8f-gvksp
	f57a8cfc44d6c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   22 seconds ago       Exited              kube-proxy                2                   cb4e737d461f2       kube-proxy-4kxgb
	2c54c77f938aa       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago       Exited              etcd                      2                   526336bb93137       etcd-kubernetes-upgrade-108715
	01d24e14effae       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   22 seconds ago       Exited              kube-apiserver            2                   1d1841af3982e       kube-apiserver-kubernetes-upgrade-108715
	41ac786f4c592       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   22 seconds ago       Exited              kube-controller-manager   2                   257600fc02536       kube-controller-manager-kubernetes-upgrade-108715
	a629a4bdefd15       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   22 seconds ago       Exited              kube-scheduler            2                   8c68bf8799f28       kube-scheduler-kubernetes-upgrade-108715
	cafaf7419f0fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   1                   b5760ee5e38e2       coredns-6f6b679f8f-gvksp
	0cc24755ca5cd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   1                   0d68c6972897b       coredns-6f6b679f8f-dtbhm
	ca0c3f0020974       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 minutes ago        Exited              storage-provisioner       0                   995aae1bf3ead       storage-provisioner
	
	
	==> coredns [0cc24755ca5cdf32a6ecb6e38ae2fc3106fda5204381d98b93ec93c1f0a20a6b] <==
	
	
	==> coredns [70cfa7560ba283dccfb53d0de54d06ba0a1d224451e411735c445fcc4c731c24] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59292->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1382803603]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 18:01:44.619) (total time: 10775ms):
	Trace[1382803603]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59292->10.96.0.1:443: read: connection reset by peer 10775ms (18:01:55.394)
	Trace[1382803603]: [10.775303409s] [10.775303409s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59292->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59288->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2000105421]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 18:01:44.618) (total time: 10775ms):
	Trace[2000105421]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59288->10.96.0.1:443: read: connection reset by peer 10775ms (18:01:55.394)
	Trace[2000105421]: [10.775992811s] [10.775992811s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59288->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59278->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1544606162]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 18:01:44.617) (total time: 10777ms):
	Trace[1544606162]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59278->10.96.0.1:443: read: connection reset by peer 10776ms (18:01:55.394)
	Trace[1544606162]: [10.777336608s] [10.777336608s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59278->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [cafaf7419f0fb7efb44d569991e2fb7a1178718f3973dcefafb57dfa0ee8d8eb] <==
	
	
	==> coredns [f48c3c9527f0a0fc293332069e08a59a3a9ac33c943ef3a49bc252af15fe99f8] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56934->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1960294400]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 18:01:44.536) (total time: 10858ms):
	Trace[1960294400]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56934->10.96.0.1:443: read: connection reset by peer 10857ms (18:01:55.394)
	Trace[1960294400]: [10.858032211s] [10.858032211s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56934->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56924->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1270301848]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 18:01:44.536) (total time: 10858ms):
	Trace[1270301848]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56924->10.96.0.1:443: read: connection reset by peer 10858ms (18:01:55.394)
	Trace[1270301848]: [10.858554095s] [10.858554095s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56924->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56908->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1423188651]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 18:01:44.530) (total time: 10864ms):
	Trace[1423188651]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56908->10.96.0.1:443: read: connection reset by peer 10863ms (18:01:55.394)
	Trace[1423188651]: [10.864535286s] [10.864535286s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:56908->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-108715
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-108715
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:59:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-108715
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:02:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:02:01 +0000   Fri, 16 Aug 2024 17:59:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:02:01 +0000   Fri, 16 Aug 2024 17:59:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:02:01 +0000   Fri, 16 Aug 2024 17:59:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:02:01 +0000   Fri, 16 Aug 2024 17:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.8
	  Hostname:    kubernetes-upgrade-108715
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab2920a05c6e46a4b52672fffe395363
	  System UUID:                ab2920a0-5c6e-46a4-b526-72fffe395363
	  Boot ID:                    53b9f660-3388-4a8b-8c5f-efae7b3758ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-dtbhm                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m13s
	  kube-system                 coredns-6f6b679f8f-gvksp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m13s
	  kube-system                 etcd-kubernetes-upgrade-108715                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m12s
	  kube-system                 kube-apiserver-kubernetes-upgrade-108715             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-108715    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-4kxgb                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-kubernetes-upgrade-108715             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5s                     kube-proxy       
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m27s)  kubelet          Node kubernetes-upgrade-108715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m27s)  kubelet          Node kubernetes-upgrade-108715 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m27s)  kubelet          Node kubernetes-upgrade-108715 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           2m14s                  node-controller  Node kubernetes-upgrade-108715 event: Registered Node kubernetes-upgrade-108715 in Controller
	  Normal  Starting                 10s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)      kubelet          Node kubernetes-upgrade-108715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)      kubelet          Node kubernetes-upgrade-108715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)      kubelet          Node kubernetes-upgrade-108715 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                     node-controller  Node kubernetes-upgrade-108715 event: Registered Node kubernetes-upgrade-108715 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.269790] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.070225] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052581] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.174453] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.124851] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.278497] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +3.748835] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +2.162163] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.073974] kauditd_printk_skb: 158 callbacks suppressed
	[ +12.837613] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.083764] kauditd_printk_skb: 69 callbacks suppressed
	[Aug16 18:00] kauditd_printk_skb: 109 callbacks suppressed
	[  +1.675856] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +0.234901] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +0.332270] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +0.291336] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +0.545912] systemd-fstab-generator[2992]: Ignoring "noauto" option for root device
	[Aug16 18:01] systemd-fstab-generator[3333]: Ignoring "noauto" option for root device
	[  +0.078271] kauditd_printk_skb: 207 callbacks suppressed
	[  +5.963411] kauditd_printk_skb: 113 callbacks suppressed
	[  +8.575027] systemd-fstab-generator[4323]: Ignoring "noauto" option for root device
	[Aug16 18:02] kauditd_printk_skb: 48 callbacks suppressed
	[  +1.960948] systemd-fstab-generator[4781]: Ignoring "noauto" option for root device
	
	
	==> etcd [2c54c77f938aa573092520fa0da1f3d2980321e3ea8ae262748822ce6ff74090] <==
	{"level":"warn","ts":"2024-08-16T18:01:44.062790Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-16T18:01:44.063392Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.8:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.8:2380","--initial-cluster=kubernetes-upgrade-108715=https://192.168.39.8:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.8:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.8:2380","--name=kubernetes-upgrade-108715","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-coun
t=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-08-16T18:01:44.065343Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-08-16T18:01:44.065642Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-16T18:01:44.065889Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.8:2380"]}
	{"level":"info","ts":"2024-08-16T18:01:44.066633Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T18:01:44.072002Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.8:2379"]}
	{"level":"info","ts":"2024-08-16T18:01:44.074882Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-108715","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.8:2380"],"listen-peer-urls":["https://192.168.39.8:2380"],"advertise-client-urls":["https://192.168.39.8:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.8:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initi
al-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-08-16T18:01:44.105032Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"26.629989ms"}
	{"level":"info","ts":"2024-08-16T18:01:44.178124Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-16T18:01:44.200488Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"ebeeb2da37a85eb1","local-member-id":"5d432f19cde6e0bf","commit-index":406}
	{"level":"info","ts":"2024-08-16T18:01:44.200617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-16T18:01:44.200672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf became follower at term 2"}
	{"level":"info","ts":"2024-08-16T18:01:44.200686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 5d432f19cde6e0bf [peers: [], term: 2, commit: 406, applied: 0, lastindex: 406, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-16T18:01:44.228719Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	
	
	==> etcd [5abd62ff40fcd3dad64d75571c156bf700f0d14a0b4d9dcb238040aed5cc1ec5] <==
	{"level":"info","ts":"2024-08-16T18:01:58.260455Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ebeeb2da37a85eb1","local-member-id":"5d432f19cde6e0bf","added-peer-id":"5d432f19cde6e0bf","added-peer-peer-urls":["https://192.168.39.8:2380"]}
	{"level":"info","ts":"2024-08-16T18:01:58.260594Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ebeeb2da37a85eb1","local-member-id":"5d432f19cde6e0bf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:01:58.260638Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:01:58.265085Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:01:58.267978Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T18:01:58.268041Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.8:2380"}
	{"level":"info","ts":"2024-08-16T18:01:58.268723Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.8:2380"}
	{"level":"info","ts":"2024-08-16T18:01:58.271172Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5d432f19cde6e0bf","initial-advertise-peer-urls":["https://192.168.39.8:2380"],"listen-peer-urls":["https://192.168.39.8:2380"],"advertise-client-urls":["https://192.168.39.8:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.8:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T18:01:58.271311Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T18:01:59.627290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T18:01:59.627372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T18:01:59.627423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf received MsgPreVoteResp from 5d432f19cde6e0bf at term 2"}
	{"level":"info","ts":"2024-08-16T18:01:59.627450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T18:01:59.627460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf received MsgVoteResp from 5d432f19cde6e0bf at term 3"}
	{"level":"info","ts":"2024-08-16T18:01:59.627472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5d432f19cde6e0bf became leader at term 3"}
	{"level":"info","ts":"2024-08-16T18:01:59.627482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5d432f19cde6e0bf elected leader 5d432f19cde6e0bf at term 3"}
	{"level":"info","ts":"2024-08-16T18:01:59.632637Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5d432f19cde6e0bf","local-member-attributes":"{Name:kubernetes-upgrade-108715 ClientURLs:[https://192.168.39.8:2379]}","request-path":"/0/members/5d432f19cde6e0bf/attributes","cluster-id":"ebeeb2da37a85eb1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T18:01:59.632684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:01:59.632959Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T18:01:59.633009Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T18:01:59.633051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:01:59.633761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:01:59.634108Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:01:59.634615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T18:01:59.635009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.8:2379"}
	
	
	==> kernel <==
	 18:02:09 up 2 min,  0 users,  load average: 0.76, 0.46, 0.19
	Linux kubernetes-upgrade-108715 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01d24e14effae3981acc9e61f23098d55d0991557aa70e6ac9740dfd061d7f1b] <==
	I0816 18:01:44.324503       1 options.go:228] external host was not specified, using 192.168.39.8
	I0816 18:01:44.327583       1 server.go:142] Version: v1.31.0
	I0816 18:01:44.327612       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0816 18:01:44.913364       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:44.917138       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0816 18:01:44.917269       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 18:01:44.925863       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 18:01:44.926476       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 18:01:44.926506       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 18:01:44.926679       1 instance.go:232] Using reconciler: lease
	W0816 18:01:44.928955       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:45.917952       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:45.918007       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:45.929870       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:47.336809       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:47.417639       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:47.838606       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:49.878651       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:50.276223       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:50.788669       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:01:53.690530       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [0628dfe7012997607c2178a37c68b33ce18f8b0f3ae6bcfa0b4f05d0ffc55a7c] <==
	I0816 18:02:01.241591       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 18:02:01.241890       1 policy_source.go:224] refreshing policies
	I0816 18:02:01.249918       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 18:02:01.249991       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 18:02:01.250122       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 18:02:01.251129       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 18:02:01.255642       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 18:02:01.279446       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 18:02:01.282499       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 18:02:01.283355       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 18:02:01.283879       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 18:02:01.284257       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 18:02:01.284564       1 aggregator.go:171] initial CRD sync complete...
	I0816 18:02:01.284600       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 18:02:01.284622       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 18:02:01.284644       1 cache.go:39] Caches are synced for autoregister controller
	E0816 18:02:01.368010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 18:02:02.057250       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 18:02:03.272983       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 18:02:03.294479       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 18:02:03.371603       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 18:02:03.427435       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 18:02:03.445880       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 18:02:04.580647       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 18:02:04.785665       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [41ac786f4c592b0c0736ca6627d4f0c4e8150fad80e489ded6bd8fddc2f46e92] <==
	I0816 18:01:45.362114       1 serving.go:386] Generated self-signed cert in-memory
	I0816 18:01:45.825847       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 18:01:45.825941       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:01:45.829630       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 18:01:45.829791       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 18:01:45.830002       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 18:01:45.830037       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [75be135429b71872d4de9e3f79c20aae7e00b4abca349adf7bb18a80e3f2a12f] <==
	I0816 18:02:04.574481       1 shared_informer.go:320] Caches are synced for ephemeral
	I0816 18:02:04.574526       1 shared_informer.go:320] Caches are synced for HPA
	I0816 18:02:04.575442       1 shared_informer.go:320] Caches are synced for taint
	I0816 18:02:04.575596       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0816 18:02:04.575691       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-108715"
	I0816 18:02:04.575728       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0816 18:02:04.577407       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0816 18:02:04.585085       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0816 18:02:04.587133       1 shared_informer.go:320] Caches are synced for daemon sets
	I0816 18:02:04.589894       1 shared_informer.go:320] Caches are synced for endpoint
	I0816 18:02:04.595969       1 shared_informer.go:320] Caches are synced for job
	I0816 18:02:04.602330       1 shared_informer.go:320] Caches are synced for disruption
	I0816 18:02:04.605066       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0816 18:02:04.605298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="114.059µs"
	I0816 18:02:04.612187       1 shared_informer.go:320] Caches are synced for persistent volume
	I0816 18:02:04.636385       1 shared_informer.go:320] Caches are synced for deployment
	I0816 18:02:04.649008       1 shared_informer.go:320] Caches are synced for namespace
	I0816 18:02:04.649025       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 18:02:04.659405       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 18:02:04.680019       1 shared_informer.go:320] Caches are synced for service account
	I0816 18:02:04.719285       1 shared_informer.go:320] Caches are synced for attach detach
	I0816 18:02:04.726975       1 shared_informer.go:320] Caches are synced for cronjob
	I0816 18:02:05.187575       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 18:02:05.187717       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 18:02:05.234805       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [ac5865a7ccda1fb0d4c1b62acbc51b63233da8b77fd8f51d0c4ad87857a063c8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 18:02:01.814242       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 18:02:01.830121       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.8"]
	E0816 18:02:01.830329       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 18:02:01.882201       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 18:02:01.882373       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 18:02:01.882426       1 server_linux.go:169] "Using iptables Proxier"
	I0816 18:02:01.886683       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 18:02:01.887309       1 server.go:483] "Version info" version="v1.31.0"
	I0816 18:02:01.887402       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:02:01.889482       1 config.go:197] "Starting service config controller"
	I0816 18:02:01.889545       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 18:02:01.889594       1 config.go:104] "Starting endpoint slice config controller"
	I0816 18:02:01.889618       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 18:02:01.890380       1 config.go:326] "Starting node config controller"
	I0816 18:02:01.891897       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 18:02:01.990424       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 18:02:01.990486       1 shared_informer.go:320] Caches are synced for service config
	I0816 18:02:01.992068       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f57a8cfc44d6c39887f29246c25f1e62eb8f7fb2d85abdacab38dcb3232abc0e] <==
	
	
	==> kube-scheduler [84f7fe79dcaa0f4b01d8576be0730e3c2094d4724edba621b30da7ac69ea72cd] <==
	I0816 18:01:58.775606       1 serving.go:386] Generated self-signed cert in-memory
	W0816 18:02:01.161530       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 18:02:01.161576       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 18:02:01.161590       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 18:02:01.161596       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 18:02:01.209376       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 18:02:01.213642       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:02:01.220899       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 18:02:01.226420       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 18:02:01.238125       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 18:02:01.238314       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 18:02:01.327197       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a629a4bdefd15488bf20c9c30b78ec3aec32194d5524df19d8e683eb7a2c8d7d] <==
	I0816 18:01:44.628229       1 serving.go:386] Generated self-signed cert in-memory
	W0816 18:01:55.395761       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.39.8:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.8:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.8:48182->192.168.39.8:8443: read: connection reset by peer
	W0816 18:01:55.395866       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 18:01:55.395877       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 18:01:55.411181       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 18:01:55.411222       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0816 18:01:55.411245       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0816 18:01:55.413123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 18:01:55.413161       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 18:01:55.413197       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0816 18:01:55.413328       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0816 18:01:55.413426       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0816 18:01:55.413493       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 18:01:57 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:01:57.762251    4330 scope.go:117] "RemoveContainer" containerID="41ac786f4c592b0c0736ca6627d4f0c4e8150fad80e489ded6bd8fddc2f46e92"
	Aug 16 18:01:57 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:01:57.763339    4330 scope.go:117] "RemoveContainer" containerID="a629a4bdefd15488bf20c9c30b78ec3aec32194d5524df19d8e683eb7a2c8d7d"
	Aug 16 18:01:57 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:01:57.764342    4330 scope.go:117] "RemoveContainer" containerID="2c54c77f938aa573092520fa0da1f3d2980321e3ea8ae262748822ce6ff74090"
	Aug 16 18:01:57 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:01:57.883660    4330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-108715?timeout=10s\": dial tcp 192.168.39.8:8443: connect: connection refused" interval="800ms"
	Aug 16 18:01:58 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:01:58.091136    4330 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-108715"
	Aug 16 18:01:58 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:01:58.092148    4330 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.8:8443: connect: connection refused" node="kubernetes-upgrade-108715"
	Aug 16 18:01:58 kubernetes-upgrade-108715 kubelet[4330]: W0816 18:01:58.108101    4330 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.39.8:8443: connect: connection refused
	Aug 16 18:01:58 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:01:58.108234    4330 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.8:8443: connect: connection refused" logger="UnhandledError"
	Aug 16 18:01:58 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:01:58.894053    4330 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-108715"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.256749    4330 apiserver.go:52] "Watching apiserver"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.280487    4330 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.310703    4330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/33e0bb00-d314-447b-96d7-1347e3b0b636-tmp\") pod \"storage-provisioner\" (UID: \"33e0bb00-d314-447b-96d7-1347e3b0b636\") " pod="kube-system/storage-provisioner"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.310860    4330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e24eb4c3-6178-4382-925a-b3ecd3f3679d-xtables-lock\") pod \"kube-proxy-4kxgb\" (UID: \"e24eb4c3-6178-4382-925a-b3ecd3f3679d\") " pod="kube-system/kube-proxy-4kxgb"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.310907    4330 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e24eb4c3-6178-4382-925a-b3ecd3f3679d-lib-modules\") pod \"kube-proxy-4kxgb\" (UID: \"e24eb4c3-6178-4382-925a-b3ecd3f3679d\") " pod="kube-system/kube-proxy-4kxgb"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.371661    4330 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-108715"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.371885    4330 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-108715"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.371990    4330 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.373277    4330 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.580786    4330 scope.go:117] "RemoveContainer" containerID="f57a8cfc44d6c39887f29246c25f1e62eb8f7fb2d85abdacab38dcb3232abc0e"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: I0816 18:02:01.581555    4330 scope.go:117] "RemoveContainer" containerID="ca0c3f0020974e33768ea2c94049e982998dce891c8d2611033a35ac152711ab"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:02:01.592615    4330 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_storage-provisioner_storage-provisioner_kube-system_33e0bb00-d314-447b-96d7-1347e3b0b636_1\" is already in use by 3d5b172c96359757ab4cbea09e0bee6bcaca652dfaeb447dbfa12c794b872e6f. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="c17b33534cefb5bf0e8b2a245331100b26441310fbb26bd94444216de4bf1b5a"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:02:01.592920    4330 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fmvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(33e0bb00-d314-447b-96d7-1347e3b0b636): CreateContainerError: the container name \"k8s_storage-provisioner_storage-provisioner_kube-system_33e0bb00-d314-447b-96d7-1347e3b0b636_1\" is already in use by 3d5b172c96359757ab4cbea09e0bee6bcaca652dfaeb447dbfa12c794b872e6f. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Aug 16 18:02:01 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:02:01.594265    4330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"the container name \\\"k8s_storage-provisioner_storage-provisioner_kube-system_33e0bb00-d314-447b-96d7-1347e3b0b636_1\\\" is already in use by 3d5b172c96359757ab4cbea09e0bee6bcaca652dfaeb447dbfa12c794b872e6f. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/storage-provisioner" podUID="33e0bb00-d314-447b-96d7-1347e3b0b636"
	Aug 16 18:02:07 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:02:07.390236    4330 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831327389744453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:02:07 kubernetes-upgrade-108715 kubelet[4330]: E0816 18:02:07.390277    4330 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723831327389744453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ca0c3f0020974e33768ea2c94049e982998dce891c8d2611033a35ac152711ab] <==
	I0816 17:59:55.086138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 17:59:55.116693       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 17:59:55.121656       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 17:59:55.133258       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 17:59:55.135399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-108715_e9b601a4-23d9-4590-970a-e9946f142647!
	I0816 17:59:55.142168       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d093661-355a-4daa-8276-cf6bd4bbb59e", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-108715_e9b601a4-23d9-4590-970a-e9946f142647 became leader
	I0816 17:59:55.237950       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-108715_e9b601a4-23d9-4590-970a-e9946f142647!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:02:05.501533   62107 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19461-9545/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-108715 -n kubernetes-upgrade-108715
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-108715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-108715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-108715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-108715: (1.172735683s)
--- FAIL: TestKubernetesUpgrade (457.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (287.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m46.935977573s)

                                                
                                                
-- stdout --
	* [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:04:10.377168   67727 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:04:10.377408   67727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:04:10.377417   67727 out.go:358] Setting ErrFile to fd 2...
	I0816 18:04:10.377421   67727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:04:10.377622   67727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:04:10.378311   67727 out.go:352] Setting JSON to false
	I0816 18:04:10.379414   67727 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6348,"bootTime":1723825102,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:04:10.379476   67727 start.go:139] virtualization: kvm guest
	I0816 18:04:10.381385   67727 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:04:10.382740   67727 notify.go:220] Checking for updates...
	I0816 18:04:10.383368   67727 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:04:10.384618   67727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:04:10.386100   67727 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:04:10.387470   67727 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:04:10.388751   67727 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:04:10.389916   67727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:04:10.391555   67727 config.go:182] Loaded profile config "calico-791304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:04:10.391666   67727 config.go:182] Loaded profile config "custom-flannel-791304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:04:10.391741   67727 config.go:182] Loaded profile config "kindnet-791304": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:04:10.391824   67727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:04:10.430966   67727 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 18:04:10.432266   67727 start.go:297] selected driver: kvm2
	I0816 18:04:10.432282   67727 start.go:901] validating driver "kvm2" against <nil>
	I0816 18:04:10.432293   67727 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:04:10.433087   67727 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:04:10.433157   67727 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:04:10.450273   67727 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:04:10.450351   67727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 18:04:10.450597   67727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:04:10.450668   67727 cni.go:84] Creating CNI manager for ""
	I0816 18:04:10.450689   67727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:04:10.450703   67727 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 18:04:10.450769   67727 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:04:10.450915   67727 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:04:10.452593   67727 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:04:10.454132   67727 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:04:10.454178   67727 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:04:10.454189   67727 cache.go:56] Caching tarball of preloaded images
	I0816 18:04:10.454281   67727 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:04:10.454295   67727 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:04:10.454425   67727 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:04:10.454455   67727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json: {Name:mkade4cf5d56ea0f5bb661a6ae704f7309fcb271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:10.454651   67727 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:04:26.001246   67727 start.go:364] duration metric: took 15.546519365s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:04:26.001340   67727 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:04:26.001476   67727 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 18:04:26.003389   67727 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 18:04:26.003612   67727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:04:26.003651   67727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:04:26.020119   67727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0816 18:04:26.020606   67727 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:04:26.021180   67727 main.go:141] libmachine: Using API Version  1
	I0816 18:04:26.021203   67727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:04:26.021508   67727 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:04:26.021677   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:04:26.021803   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:26.021921   67727 start.go:159] libmachine.API.Create for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:04:26.021944   67727 client.go:168] LocalClient.Create starting
	I0816 18:04:26.021973   67727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 18:04:26.022008   67727 main.go:141] libmachine: Decoding PEM data...
	I0816 18:04:26.022028   67727 main.go:141] libmachine: Parsing certificate...
	I0816 18:04:26.022081   67727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 18:04:26.022101   67727 main.go:141] libmachine: Decoding PEM data...
	I0816 18:04:26.022114   67727 main.go:141] libmachine: Parsing certificate...
	I0816 18:04:26.022141   67727 main.go:141] libmachine: Running pre-create checks...
	I0816 18:04:26.022154   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .PreCreateCheck
	I0816 18:04:26.022521   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:04:26.022868   67727 main.go:141] libmachine: Creating machine...
	I0816 18:04:26.022881   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .Create
	I0816 18:04:26.023015   67727 main.go:141] libmachine: (old-k8s-version-783465) Creating KVM machine...
	I0816 18:04:26.024091   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found existing default KVM network
	I0816 18:04:26.025933   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:26.025798   67950 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f7f0}
	I0816 18:04:26.025989   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | created network xml: 
	I0816 18:04:26.026011   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | <network>
	I0816 18:04:26.026026   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |   <name>mk-old-k8s-version-783465</name>
	I0816 18:04:26.026038   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |   <dns enable='no'/>
	I0816 18:04:26.026051   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |   
	I0816 18:04:26.026063   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 18:04:26.026086   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |     <dhcp>
	I0816 18:04:26.026097   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 18:04:26.026108   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |     </dhcp>
	I0816 18:04:26.026120   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |   </ip>
	I0816 18:04:26.026130   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG |   
	I0816 18:04:26.026142   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | </network>
	I0816 18:04:26.026150   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | 
	I0816 18:04:26.031074   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | trying to create private KVM network mk-old-k8s-version-783465 192.168.39.0/24...
	I0816 18:04:26.107732   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | private KVM network mk-old-k8s-version-783465 192.168.39.0/24 created
	I0816 18:04:26.107767   67727 main.go:141] libmachine: (old-k8s-version-783465) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465 ...
	I0816 18:04:26.107793   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:26.107714   67950 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:04:26.107807   67727 main.go:141] libmachine: (old-k8s-version-783465) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 18:04:26.107829   67727 main.go:141] libmachine: (old-k8s-version-783465) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 18:04:26.362118   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:26.362003   67950 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa...
	I0816 18:04:26.447875   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:26.447767   67950 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/old-k8s-version-783465.rawdisk...
	I0816 18:04:26.447906   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Writing magic tar header
	I0816 18:04:26.447923   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Writing SSH key tar header
	I0816 18:04:26.447935   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:26.447880   67950 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465 ...
	I0816 18:04:26.447955   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465
	I0816 18:04:26.448003   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 18:04:26.448046   67727 main.go:141] libmachine: (old-k8s-version-783465) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465 (perms=drwx------)
	I0816 18:04:26.448060   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:04:26.448078   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 18:04:26.448092   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 18:04:26.448108   67727 main.go:141] libmachine: (old-k8s-version-783465) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 18:04:26.448128   67727 main.go:141] libmachine: (old-k8s-version-783465) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 18:04:26.448142   67727 main.go:141] libmachine: (old-k8s-version-783465) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 18:04:26.448157   67727 main.go:141] libmachine: (old-k8s-version-783465) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 18:04:26.448169   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Checking permissions on dir: /home/jenkins
	I0816 18:04:26.448180   67727 main.go:141] libmachine: (old-k8s-version-783465) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 18:04:26.448195   67727 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:04:26.448207   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Checking permissions on dir: /home
	I0816 18:04:26.448223   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Skipping /home - not owner
	I0816 18:04:26.449276   67727 main.go:141] libmachine: (old-k8s-version-783465) define libvirt domain using xml: 
	I0816 18:04:26.449308   67727 main.go:141] libmachine: (old-k8s-version-783465) <domain type='kvm'>
	I0816 18:04:26.449319   67727 main.go:141] libmachine: (old-k8s-version-783465)   <name>old-k8s-version-783465</name>
	I0816 18:04:26.449328   67727 main.go:141] libmachine: (old-k8s-version-783465)   <memory unit='MiB'>2200</memory>
	I0816 18:04:26.449336   67727 main.go:141] libmachine: (old-k8s-version-783465)   <vcpu>2</vcpu>
	I0816 18:04:26.449358   67727 main.go:141] libmachine: (old-k8s-version-783465)   <features>
	I0816 18:04:26.449372   67727 main.go:141] libmachine: (old-k8s-version-783465)     <acpi/>
	I0816 18:04:26.449379   67727 main.go:141] libmachine: (old-k8s-version-783465)     <apic/>
	I0816 18:04:26.449390   67727 main.go:141] libmachine: (old-k8s-version-783465)     <pae/>
	I0816 18:04:26.449405   67727 main.go:141] libmachine: (old-k8s-version-783465)     
	I0816 18:04:26.449416   67727 main.go:141] libmachine: (old-k8s-version-783465)   </features>
	I0816 18:04:26.449422   67727 main.go:141] libmachine: (old-k8s-version-783465)   <cpu mode='host-passthrough'>
	I0816 18:04:26.449430   67727 main.go:141] libmachine: (old-k8s-version-783465)   
	I0816 18:04:26.449435   67727 main.go:141] libmachine: (old-k8s-version-783465)   </cpu>
	I0816 18:04:26.449443   67727 main.go:141] libmachine: (old-k8s-version-783465)   <os>
	I0816 18:04:26.449449   67727 main.go:141] libmachine: (old-k8s-version-783465)     <type>hvm</type>
	I0816 18:04:26.449455   67727 main.go:141] libmachine: (old-k8s-version-783465)     <boot dev='cdrom'/>
	I0816 18:04:26.449460   67727 main.go:141] libmachine: (old-k8s-version-783465)     <boot dev='hd'/>
	I0816 18:04:26.449466   67727 main.go:141] libmachine: (old-k8s-version-783465)     <bootmenu enable='no'/>
	I0816 18:04:26.449476   67727 main.go:141] libmachine: (old-k8s-version-783465)   </os>
	I0816 18:04:26.449504   67727 main.go:141] libmachine: (old-k8s-version-783465)   <devices>
	I0816 18:04:26.449530   67727 main.go:141] libmachine: (old-k8s-version-783465)     <disk type='file' device='cdrom'>
	I0816 18:04:26.449555   67727 main.go:141] libmachine: (old-k8s-version-783465)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/boot2docker.iso'/>
	I0816 18:04:26.449567   67727 main.go:141] libmachine: (old-k8s-version-783465)       <target dev='hdc' bus='scsi'/>
	I0816 18:04:26.449578   67727 main.go:141] libmachine: (old-k8s-version-783465)       <readonly/>
	I0816 18:04:26.449589   67727 main.go:141] libmachine: (old-k8s-version-783465)     </disk>
	I0816 18:04:26.449599   67727 main.go:141] libmachine: (old-k8s-version-783465)     <disk type='file' device='disk'>
	I0816 18:04:26.449612   67727 main.go:141] libmachine: (old-k8s-version-783465)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 18:04:26.449626   67727 main.go:141] libmachine: (old-k8s-version-783465)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/old-k8s-version-783465.rawdisk'/>
	I0816 18:04:26.449638   67727 main.go:141] libmachine: (old-k8s-version-783465)       <target dev='hda' bus='virtio'/>
	I0816 18:04:26.449648   67727 main.go:141] libmachine: (old-k8s-version-783465)     </disk>
	I0816 18:04:26.449660   67727 main.go:141] libmachine: (old-k8s-version-783465)     <interface type='network'>
	I0816 18:04:26.449671   67727 main.go:141] libmachine: (old-k8s-version-783465)       <source network='mk-old-k8s-version-783465'/>
	I0816 18:04:26.449682   67727 main.go:141] libmachine: (old-k8s-version-783465)       <model type='virtio'/>
	I0816 18:04:26.449690   67727 main.go:141] libmachine: (old-k8s-version-783465)     </interface>
	I0816 18:04:26.449711   67727 main.go:141] libmachine: (old-k8s-version-783465)     <interface type='network'>
	I0816 18:04:26.449721   67727 main.go:141] libmachine: (old-k8s-version-783465)       <source network='default'/>
	I0816 18:04:26.449734   67727 main.go:141] libmachine: (old-k8s-version-783465)       <model type='virtio'/>
	I0816 18:04:26.449748   67727 main.go:141] libmachine: (old-k8s-version-783465)     </interface>
	I0816 18:04:26.449760   67727 main.go:141] libmachine: (old-k8s-version-783465)     <serial type='pty'>
	I0816 18:04:26.449771   67727 main.go:141] libmachine: (old-k8s-version-783465)       <target port='0'/>
	I0816 18:04:26.449780   67727 main.go:141] libmachine: (old-k8s-version-783465)     </serial>
	I0816 18:04:26.449790   67727 main.go:141] libmachine: (old-k8s-version-783465)     <console type='pty'>
	I0816 18:04:26.449802   67727 main.go:141] libmachine: (old-k8s-version-783465)       <target type='serial' port='0'/>
	I0816 18:04:26.449812   67727 main.go:141] libmachine: (old-k8s-version-783465)     </console>
	I0816 18:04:26.449821   67727 main.go:141] libmachine: (old-k8s-version-783465)     <rng model='virtio'>
	I0816 18:04:26.449831   67727 main.go:141] libmachine: (old-k8s-version-783465)       <backend model='random'>/dev/random</backend>
	I0816 18:04:26.449837   67727 main.go:141] libmachine: (old-k8s-version-783465)     </rng>
	I0816 18:04:26.449842   67727 main.go:141] libmachine: (old-k8s-version-783465)     
	I0816 18:04:26.449848   67727 main.go:141] libmachine: (old-k8s-version-783465)     
	I0816 18:04:26.449856   67727 main.go:141] libmachine: (old-k8s-version-783465)   </devices>
	I0816 18:04:26.449861   67727 main.go:141] libmachine: (old-k8s-version-783465) </domain>
	I0816 18:04:26.449868   67727 main.go:141] libmachine: (old-k8s-version-783465) 
	I0816 18:04:26.453926   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:db:c2:d9 in network default
	I0816 18:04:26.454500   67727 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:04:26.454525   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:26.455222   67727 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:04:26.455565   67727 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:04:26.456069   67727 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:04:26.456932   67727 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:04:27.815273   67727 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:04:27.816028   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:27.816566   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:27.816585   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:27.816554   67950 retry.go:31] will retry after 303.320971ms: waiting for machine to come up
	I0816 18:04:28.121144   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:28.121689   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:28.121712   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:28.121645   67950 retry.go:31] will retry after 317.884539ms: waiting for machine to come up
	I0816 18:04:28.444242   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:28.445078   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:28.445102   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:28.444986   67950 retry.go:31] will retry after 477.094134ms: waiting for machine to come up
	I0816 18:04:28.923768   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:28.924334   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:28.924358   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:28.924257   67950 retry.go:31] will retry after 556.20592ms: waiting for machine to come up
	I0816 18:04:29.481818   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:29.482372   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:29.482426   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:29.482358   67950 retry.go:31] will retry after 758.448212ms: waiting for machine to come up
	I0816 18:04:30.242812   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:30.243520   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:30.243546   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:30.243468   67950 retry.go:31] will retry after 680.477807ms: waiting for machine to come up
	I0816 18:04:30.925362   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:30.925969   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:30.925998   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:30.925886   67950 retry.go:31] will retry after 970.693624ms: waiting for machine to come up
	I0816 18:04:31.898276   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:31.898747   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:31.898776   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:31.898703   67950 retry.go:31] will retry after 1.122432816s: waiting for machine to come up
	I0816 18:04:33.022738   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:33.023244   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:33.023293   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:33.023203   67950 retry.go:31] will retry after 1.500265931s: waiting for machine to come up
	I0816 18:04:34.525627   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:34.526242   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:34.526273   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:34.526182   67950 retry.go:31] will retry after 1.425680484s: waiting for machine to come up
	I0816 18:04:35.953979   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:35.954665   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:35.954710   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:35.954587   67950 retry.go:31] will retry after 2.432510938s: waiting for machine to come up
	I0816 18:04:38.391052   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:38.391616   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:38.391703   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:38.391658   67950 retry.go:31] will retry after 2.2296108s: waiting for machine to come up
	I0816 18:04:40.623114   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:40.623998   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:40.624023   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:40.623916   67950 retry.go:31] will retry after 4.077958106s: waiting for machine to come up
	I0816 18:04:44.705791   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:44.765485   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:04:44.765515   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:04:44.765416   67950 retry.go:31] will retry after 4.926584617s: waiting for machine to come up
	I0816 18:04:49.693644   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:49.694227   67727 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:04:49.694259   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:49.694269   67727 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:04:49.694604   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465
	I0816 18:04:49.768046   67727 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:04:49.768072   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:04:49.768081   67727 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:04:49.770801   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:49.771230   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:49.771260   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:49.771417   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:04:49.771443   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:04:49.771470   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:04:49.771485   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:04:49.771501   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:04:49.896854   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:04:49.897097   67727 main.go:141] libmachine: (old-k8s-version-783465) KVM machine creation complete!
	I0816 18:04:49.897434   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:04:49.897906   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:49.898103   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:49.898259   67727 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 18:04:49.898275   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:04:49.899551   67727 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 18:04:49.899576   67727 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 18:04:49.899590   67727 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 18:04:49.899604   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:49.901549   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:49.901960   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:49.901991   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:49.902093   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:49.902264   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:49.902418   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:49.902528   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:49.902714   67727 main.go:141] libmachine: Using SSH client type: native
	I0816 18:04:49.902914   67727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:04:49.902926   67727 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 18:04:50.007823   67727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:04:50.007852   67727 main.go:141] libmachine: Detecting the provisioner...
	I0816 18:04:50.007860   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:50.010880   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.011254   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:50.011292   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.011480   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:50.011670   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.011851   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.011979   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:50.012153   67727 main.go:141] libmachine: Using SSH client type: native
	I0816 18:04:50.012360   67727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:04:50.012373   67727 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 18:04:50.116832   67727 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 18:04:50.116942   67727 main.go:141] libmachine: found compatible host: buildroot
	I0816 18:04:50.116953   67727 main.go:141] libmachine: Provisioning with buildroot...
	I0816 18:04:50.116959   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:04:50.117209   67727 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:04:50.117238   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:04:50.117420   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:50.119828   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.120229   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:50.120253   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.120413   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:50.120601   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.120754   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.120889   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:50.121014   67727 main.go:141] libmachine: Using SSH client type: native
	I0816 18:04:50.121236   67727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:04:50.121254   67727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:04:50.237998   67727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:04:50.238027   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:50.240850   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.241192   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:50.241214   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.241425   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:50.241598   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.241770   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.241871   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:50.242013   67727 main.go:141] libmachine: Using SSH client type: native
	I0816 18:04:50.242193   67727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:04:50.242209   67727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:04:50.356202   67727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:04:50.356239   67727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:04:50.356291   67727 buildroot.go:174] setting up certificates
	I0816 18:04:50.356308   67727 provision.go:84] configureAuth start
	I0816 18:04:50.356324   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:04:50.356663   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:04:50.359528   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.359875   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:50.359896   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.360096   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:50.362506   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.362829   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:50.362858   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.362972   67727 provision.go:143] copyHostCerts
	I0816 18:04:50.363032   67727 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:04:50.363049   67727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:04:50.363105   67727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:04:50.363219   67727 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:04:50.363233   67727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:04:50.363256   67727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:04:50.363318   67727 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:04:50.363332   67727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:04:50.363351   67727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:04:50.363410   67727 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
	I0816 18:04:50.623009   67727 provision.go:177] copyRemoteCerts
	I0816 18:04:50.623083   67727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:04:50.623111   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:50.626300   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.626757   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:50.626789   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.626993   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:50.627217   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.627401   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:50.627572   67727 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:04:50.721477   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:04:50.749809   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:04:50.778213   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:04:50.808443   67727 provision.go:87] duration metric: took 452.116204ms to configureAuth
	I0816 18:04:50.808481   67727 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:04:50.808704   67727 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:04:50.808835   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:50.811953   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.812374   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:50.812406   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:50.812678   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:50.812858   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.812989   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:50.813248   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:50.813439   67727 main.go:141] libmachine: Using SSH client type: native
	I0816 18:04:50.813651   67727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:04:50.813674   67727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:04:51.126819   67727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:04:51.126853   67727 main.go:141] libmachine: Checking connection to Docker...
	I0816 18:04:51.126873   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetURL
	I0816 18:04:51.128426   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using libvirt version 6000000
	I0816 18:04:51.131281   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.131723   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:51.131751   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.132017   67727 main.go:141] libmachine: Docker is up and running!
	I0816 18:04:51.132034   67727 main.go:141] libmachine: Reticulating splines...
	I0816 18:04:51.132042   67727 client.go:171] duration metric: took 25.110087628s to LocalClient.Create
	I0816 18:04:51.132075   67727 start.go:167] duration metric: took 25.110153449s to libmachine.API.Create "old-k8s-version-783465"
	I0816 18:04:51.132091   67727 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:04:51.132107   67727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:04:51.132131   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:51.132398   67727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:04:51.132428   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:51.134859   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.135209   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:51.135256   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.135423   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:51.135628   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:51.135813   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:51.135961   67727 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:04:51.222652   67727 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:04:51.226879   67727 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:04:51.226905   67727 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:04:51.226965   67727 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:04:51.227092   67727 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:04:51.227217   67727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:04:51.235975   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:04:51.258385   67727 start.go:296] duration metric: took 126.275787ms for postStartSetup
	I0816 18:04:51.258456   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:04:51.259041   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:04:51.261618   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.261983   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:51.262005   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.262230   67727 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:04:51.262419   67727 start.go:128] duration metric: took 25.26093016s to createHost
	I0816 18:04:51.262446   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:51.264767   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.265119   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:51.265147   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.265261   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:51.265430   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:51.265582   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:51.265721   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:51.265869   67727 main.go:141] libmachine: Using SSH client type: native
	I0816 18:04:51.266038   67727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:04:51.266052   67727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:04:51.373012   67727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723831491.329706508
	
	I0816 18:04:51.373045   67727 fix.go:216] guest clock: 1723831491.329706508
	I0816 18:04:51.373057   67727 fix.go:229] Guest: 2024-08-16 18:04:51.329706508 +0000 UTC Remote: 2024-08-16 18:04:51.262434378 +0000 UTC m=+40.928270001 (delta=67.27213ms)
	I0816 18:04:51.373099   67727 fix.go:200] guest clock delta is within tolerance: 67.27213ms
	I0816 18:04:51.373122   67727 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 25.371826821s
	I0816 18:04:51.373162   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:51.373453   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:04:51.376265   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.376699   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:51.376726   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.376835   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:51.377395   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:51.377599   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:04:51.377685   67727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:04:51.377739   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:51.377795   67727 ssh_runner.go:195] Run: cat /version.json
	I0816 18:04:51.377817   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:04:51.380524   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.380825   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:51.380859   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.380881   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.381062   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:51.381224   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:51.381327   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:51.381355   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:51.381392   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:51.381525   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:04:51.381696   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:04:51.381714   67727 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:04:51.381845   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:04:51.381975   67727 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:04:51.465403   67727 ssh_runner.go:195] Run: systemctl --version
	I0816 18:04:51.504348   67727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:04:51.666270   67727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:04:51.672101   67727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:04:51.672188   67727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:04:51.688077   67727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:04:51.688105   67727 start.go:495] detecting cgroup driver to use...
	I0816 18:04:51.688190   67727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:04:51.704503   67727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:04:51.718396   67727 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:04:51.718474   67727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:04:51.733326   67727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:04:51.748640   67727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:04:51.872208   67727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:04:52.014235   67727 docker.go:233] disabling docker service ...
	I0816 18:04:52.014307   67727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:04:52.028357   67727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:04:52.041095   67727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:04:52.181175   67727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:04:52.300912   67727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:04:52.315015   67727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:04:52.333927   67727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:04:52.333995   67727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:04:52.343792   67727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:04:52.343859   67727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:04:52.353707   67727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:04:52.363928   67727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:04:52.375905   67727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:04:52.387339   67727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:04:52.396921   67727 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:04:52.396998   67727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:04:52.409615   67727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:04:52.419374   67727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:04:52.533836   67727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:04:52.687074   67727 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:04:52.687157   67727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:04:52.692251   67727 start.go:563] Will wait 60s for crictl version
	I0816 18:04:52.692330   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:52.696292   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:04:52.738782   67727 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:04:52.738874   67727 ssh_runner.go:195] Run: crio --version
	I0816 18:04:52.765960   67727 ssh_runner.go:195] Run: crio --version
	I0816 18:04:52.796834   67727 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:04:52.798279   67727 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:04:52.801658   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:52.802017   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:04:41 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:04:52.802045   67727 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:04:52.802254   67727 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:04:52.806215   67727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:04:52.817882   67727 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:04:52.817998   67727 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:04:52.818048   67727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:04:52.849218   67727 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:04:52.849296   67727 ssh_runner.go:195] Run: which lz4
	I0816 18:04:52.853204   67727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:04:52.858084   67727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:04:52.858117   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 18:04:54.372113   67727 crio.go:462] duration metric: took 1.518936744s to copy over tarball
	I0816 18:04:54.372208   67727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:04:57.314585   67727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.942346412s)
	I0816 18:04:57.314619   67727 crio.go:469] duration metric: took 2.942466318s to extract the tarball
	I0816 18:04:57.314630   67727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:04:57.360096   67727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:04:57.413097   67727 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:04:57.413127   67727 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:04:57.413212   67727 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:04:57.413215   67727 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:04:57.413282   67727 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:04:57.413268   67727 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:04:57.413324   67727 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:04:57.413334   67727 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:04:57.413365   67727 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:04:57.413217   67727 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:04:57.414704   67727 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:04:57.414799   67727 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:04:57.414981   67727 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:04:57.414704   67727 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:04:57.415095   67727 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:04:57.415096   67727 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:04:57.415351   67727 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:04:57.415635   67727 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:04:57.662032   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:04:57.718044   67727 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:04:57.718094   67727 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:04:57.718134   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:57.721668   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:04:57.753825   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:04:57.755524   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:04:57.758412   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:04:57.770277   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:04:57.788156   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:04:57.788163   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:04:57.817873   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:04:57.882425   67727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:04:57.882480   67727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:04:57.882534   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:57.898657   67727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:04:57.898723   67727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:04:57.898738   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:04:57.898763   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:57.945471   67727 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:04:57.945521   67727 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:04:57.945562   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:57.966423   67727 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:04:57.966462   67727 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:04:57.966510   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:57.974964   67727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:04:57.974998   67727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:04:57.975033   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:57.994283   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:04:57.994438   67727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:04:57.994465   67727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:04:57.994507   67727 ssh_runner.go:195] Run: which crictl
	I0816 18:04:58.011895   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:04:58.012005   67727 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:04:58.012063   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:04:58.012126   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:04:58.012189   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:04:58.120235   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:04:58.120415   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:04:58.149436   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:04:58.149472   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:04:58.149522   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:04:58.149539   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:04:58.252809   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:04:58.252885   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:04:58.263283   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:04:58.263322   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:04:58.266699   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:04:58.266882   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:04:58.294013   67727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:04:58.387034   67727 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:04:58.387125   67727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:04:58.415126   67727 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:04:58.415181   67727 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:04:58.415215   67727 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:04:58.418165   67727 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:04:58.557930   67727 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:04:58.557982   67727 cache_images.go:92] duration metric: took 1.144841631s to LoadCachedImages
	W0816 18:04:58.558053   67727 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0816 18:04:58.558066   67727 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:04:58.558158   67727 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:04:58.558220   67727 ssh_runner.go:195] Run: crio config
	I0816 18:04:58.611979   67727 cni.go:84] Creating CNI manager for ""
	I0816 18:04:58.612002   67727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:04:58.612017   67727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:04:58.612037   67727 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:04:58.612178   67727 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:04:58.612243   67727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:04:58.625551   67727 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:04:58.625635   67727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:04:58.639572   67727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:04:58.662843   67727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:04:58.682405   67727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 18:04:58.698782   67727 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:04:58.702666   67727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:04:58.716049   67727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:04:58.907584   67727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:04:58.929704   67727 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:04:58.929745   67727 certs.go:194] generating shared ca certs ...
	I0816 18:04:58.929767   67727 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:58.929929   67727 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:04:58.929985   67727 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:04:58.930001   67727 certs.go:256] generating profile certs ...
	I0816 18:04:58.930092   67727 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:04:58.930117   67727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.crt with IP's: []
	I0816 18:04:59.298455   67727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.crt ...
	I0816 18:04:59.298499   67727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.crt: {Name:mkdfa0b60f174905704f834a17584e6688dce53a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:59.298744   67727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key ...
	I0816 18:04:59.298776   67727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key: {Name:mkc2b68d34c746875c956299ecee5c69e1b213dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:59.298909   67727 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:04:59.298932   67727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt.94c45fb6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I0816 18:04:59.505794   67727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt.94c45fb6 ...
	I0816 18:04:59.505821   67727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt.94c45fb6: {Name:mk7a4d4d245692e84982a5e745e580184bf0c088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:59.505975   67727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6 ...
	I0816 18:04:59.505991   67727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6: {Name:mkbaf666d35cf81d9f36cc4dfaf9a5e4d6ec1d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:59.506086   67727 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt.94c45fb6 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt
	I0816 18:04:59.506184   67727 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key
	I0816 18:04:59.506261   67727 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:04:59.506282   67727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt with IP's: []
	I0816 18:04:59.656238   67727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt ...
	I0816 18:04:59.656265   67727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt: {Name:mkab3b33af340baede9fca51f959f58f40ed9a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:59.656446   67727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key ...
	I0816 18:04:59.656469   67727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key: {Name:mk92170505d6653e646675f1adf2fafa79061ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:04:59.656726   67727 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:04:59.656768   67727 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:04:59.656778   67727 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:04:59.656838   67727 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:04:59.656866   67727 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:04:59.656887   67727 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:04:59.656931   67727 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:04:59.657465   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:04:59.691363   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:04:59.717803   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:04:59.744369   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:04:59.769143   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:04:59.793488   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:04:59.820631   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:04:59.855672   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:04:59.896808   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:04:59.939802   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:04:59.967542   67727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:04:59.998783   67727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:05:00.018311   67727 ssh_runner.go:195] Run: openssl version
	I0816 18:05:00.024180   67727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:05:00.035139   67727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:05:00.040787   67727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:05:00.040838   67727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:05:00.048840   67727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:05:00.061516   67727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:05:00.072672   67727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:05:00.078190   67727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:05:00.078253   67727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:05:00.085005   67727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:05:00.098717   67727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:05:00.111112   67727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:05:00.116560   67727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:05:00.116612   67727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:05:00.122612   67727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:05:00.133816   67727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:05:00.137744   67727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 18:05:00.137803   67727 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:05:00.137880   67727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:05:00.137938   67727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:05:00.184010   67727 cri.go:89] found id: ""
	I0816 18:05:00.184091   67727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:05:00.194347   67727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:05:00.204968   67727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:05:00.218073   67727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:05:00.218095   67727 kubeadm.go:157] found existing configuration files:
	
	I0816 18:05:00.218154   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:05:00.230102   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:05:00.230166   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:05:00.240966   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:05:00.251464   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:05:00.251538   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:05:00.261531   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:05:00.272001   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:05:00.272059   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:05:00.283906   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:05:00.296484   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:05:00.296530   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:05:00.309537   67727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:05:00.450076   67727 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:05:00.450347   67727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:05:00.620272   67727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:05:00.620446   67727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:05:00.620560   67727 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:05:00.865565   67727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:05:00.938750   67727 out.go:235]   - Generating certificates and keys ...
	I0816 18:05:00.938866   67727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:05:00.938956   67727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:05:01.059333   67727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 18:05:01.300543   67727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 18:05:01.608129   67727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 18:05:01.765654   67727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 18:05:01.965330   67727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 18:05:01.965641   67727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-783465] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0816 18:05:02.796965   67727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 18:05:02.797152   67727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-783465] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0816 18:05:03.366069   67727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 18:05:03.566293   67727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 18:05:03.690272   67727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 18:05:03.690547   67727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:05:03.818268   67727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:05:04.226782   67727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:05:04.455323   67727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:05:04.761119   67727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:05:04.778869   67727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:05:04.779876   67727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:05:04.779943   67727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:05:04.908766   67727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:05:04.910626   67727 out.go:235]   - Booting up control plane ...
	I0816 18:05:04.910752   67727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:05:04.918444   67727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:05:04.920344   67727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:05:04.920453   67727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:05:04.933455   67727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:05:44.899429   67727 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:05:44.900295   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:05:44.900550   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:05:49.899977   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:05:49.900247   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:05:59.899317   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:05:59.899614   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:06:19.899799   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:06:19.900082   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:06:59.899015   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:06:59.899532   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:06:59.899575   67727 kubeadm.go:310] 
	I0816 18:06:59.899679   67727 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:06:59.899793   67727 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:06:59.899828   67727 kubeadm.go:310] 
	I0816 18:06:59.899965   67727 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:06:59.900064   67727 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:06:59.900310   67727 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:06:59.900323   67727 kubeadm.go:310] 
	I0816 18:06:59.900514   67727 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:06:59.900597   67727 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:06:59.900692   67727 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:06:59.900703   67727 kubeadm.go:310] 
	I0816 18:06:59.900933   67727 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:06:59.901111   67727 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:06:59.901124   67727 kubeadm.go:310] 
	I0816 18:06:59.901309   67727 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:06:59.901515   67727 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:06:59.901781   67727 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:06:59.901918   67727 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:06:59.901971   67727 kubeadm.go:310] 
	I0816 18:06:59.902281   67727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:06:59.902581   67727 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:06:59.902688   67727 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 18:06:59.902793   67727 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-783465] and IPs [192.168.39.211 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-783465] and IPs [192.168.39.211 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 18:06:59.902838   67727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:07:00.370612   67727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:07:00.385579   67727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:07:00.394962   67727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:07:00.394980   67727 kubeadm.go:157] found existing configuration files:
	
	I0816 18:07:00.395029   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:07:00.403714   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:07:00.403783   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:07:00.412424   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:07:00.420798   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:07:00.420847   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:07:00.429422   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:07:00.438168   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:07:00.438217   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:07:00.446797   67727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:07:00.455483   67727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:07:00.455537   67727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:07:00.465656   67727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:07:00.676292   67727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:08:56.671584   67727 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:08:56.671663   67727 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:08:56.673478   67727 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:08:56.673530   67727 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:08:56.673613   67727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:08:56.673715   67727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:08:56.673812   67727 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:08:56.673896   67727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:08:56.675589   67727 out.go:235]   - Generating certificates and keys ...
	I0816 18:08:56.675676   67727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:08:56.675753   67727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:08:56.675853   67727 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:08:56.675939   67727 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:08:56.676042   67727 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:08:56.676117   67727 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:08:56.676251   67727 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:08:56.676338   67727 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:08:56.676439   67727 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:08:56.676538   67727 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:08:56.676642   67727 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:08:56.676736   67727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:08:56.676819   67727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:08:56.676902   67727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:08:56.676987   67727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:08:56.677059   67727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:08:56.677151   67727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:08:56.677238   67727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:08:56.677312   67727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:08:56.677401   67727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:08:56.678682   67727 out.go:235]   - Booting up control plane ...
	I0816 18:08:56.678778   67727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:08:56.678882   67727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:08:56.678939   67727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:08:56.679009   67727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:08:56.679138   67727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:08:56.679215   67727 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:08:56.679301   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:08:56.679497   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:08:56.679577   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:08:56.679788   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:08:56.679876   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:08:56.680027   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:08:56.680085   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:08:56.680236   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:08:56.680288   67727 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:08:56.680492   67727 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:08:56.680499   67727 kubeadm.go:310] 
	I0816 18:08:56.680530   67727 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:08:56.680579   67727 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:08:56.680590   67727 kubeadm.go:310] 
	I0816 18:08:56.680660   67727 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:08:56.680708   67727 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:08:56.680819   67727 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:08:56.680828   67727 kubeadm.go:310] 
	I0816 18:08:56.680947   67727 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:08:56.680996   67727 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:08:56.681050   67727 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:08:56.681060   67727 kubeadm.go:310] 
	I0816 18:08:56.681178   67727 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:08:56.681250   67727 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:08:56.681256   67727 kubeadm.go:310] 
	I0816 18:08:56.681343   67727 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:08:56.681412   67727 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:08:56.681476   67727 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:08:56.681581   67727 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:08:56.681616   67727 kubeadm.go:310] 
	I0816 18:08:56.681659   67727 kubeadm.go:394] duration metric: took 3m56.543860949s to StartCluster
	I0816 18:08:56.681709   67727 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:08:56.681771   67727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:08:56.724315   67727 cri.go:89] found id: ""
	I0816 18:08:56.724358   67727 logs.go:276] 0 containers: []
	W0816 18:08:56.724369   67727 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:08:56.724376   67727 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:08:56.724445   67727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:08:56.760020   67727 cri.go:89] found id: ""
	I0816 18:08:56.760053   67727 logs.go:276] 0 containers: []
	W0816 18:08:56.760063   67727 logs.go:278] No container was found matching "etcd"
	I0816 18:08:56.760071   67727 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:08:56.760123   67727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:08:56.792369   67727 cri.go:89] found id: ""
	I0816 18:08:56.792397   67727 logs.go:276] 0 containers: []
	W0816 18:08:56.792408   67727 logs.go:278] No container was found matching "coredns"
	I0816 18:08:56.792415   67727 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:08:56.792480   67727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:08:56.826417   67727 cri.go:89] found id: ""
	I0816 18:08:56.826449   67727 logs.go:276] 0 containers: []
	W0816 18:08:56.826458   67727 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:08:56.826466   67727 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:08:56.826528   67727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:08:56.859130   67727 cri.go:89] found id: ""
	I0816 18:08:56.859163   67727 logs.go:276] 0 containers: []
	W0816 18:08:56.859176   67727 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:08:56.859182   67727 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:08:56.859239   67727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:08:56.891458   67727 cri.go:89] found id: ""
	I0816 18:08:56.891488   67727 logs.go:276] 0 containers: []
	W0816 18:08:56.891499   67727 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:08:56.891507   67727 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:08:56.891569   67727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:08:56.922554   67727 cri.go:89] found id: ""
	I0816 18:08:56.922583   67727 logs.go:276] 0 containers: []
	W0816 18:08:56.922592   67727 logs.go:278] No container was found matching "kindnet"
	I0816 18:08:56.922600   67727 logs.go:123] Gathering logs for dmesg ...
	I0816 18:08:56.922612   67727 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:08:56.935232   67727 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:08:56.935266   67727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:08:57.042654   67727 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:08:57.042681   67727 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:08:57.042696   67727 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:08:57.146531   67727 logs.go:123] Gathering logs for container status ...
	I0816 18:08:57.146571   67727 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:08:57.194086   67727 logs.go:123] Gathering logs for kubelet ...
	I0816 18:08:57.194116   67727 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 18:08:57.255140   67727 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:08:57.255233   67727 out.go:270] * 
	W0816 18:08:57.255303   67727 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:08:57.255323   67727 out.go:270] * 
	W0816 18:08:57.256110   67727 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:08:57.258876   67727 out.go:201] 
	W0816 18:08:57.260073   67727 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:08:57.260122   67727 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:08:57.260149   67727 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:08:57.261516   67727 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 6 (215.413714ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:08:57.518032   74378 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-783465" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (287.20s)
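
The K8S_KUBELET_NOT_RUNNING exit above already carries its own remediation hint: retry with --extra-config=kubelet.cgroup-driver=systemd and inspect the kubelet directly. A minimal reproduction sketch on the test host, assuming the same profile name and driver flags as the failed invocation (not part of the recorded run):

    # hedged sketch; profile name and flags copied from the failed start above
    out/minikube-linux-amd64 delete -p old-k8s-version-783465
    out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 \
        --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
        --extra-config=kubelet.cgroup-driver=systemd
    # if kubeadm still times out, check the kubelet from inside the guest
    out/minikube-linux-amd64 ssh -p old-k8s-version-783465 -- sudo systemctl status kubelet
    out/minikube-linux-amd64 ssh -p old-k8s-version-783465 -- sudo journalctl -xeu kubelet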

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-777541 --alsologtostderr -v=3
E0816 18:06:28.340778   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:33.462630   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:43.704707   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-777541 --alsologtostderr -v=3: exit status 82 (2m0.50105541s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-777541"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:06:27.368705   73458 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:06:27.368957   73458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:06:27.368967   73458 out.go:358] Setting ErrFile to fd 2...
	I0816 18:06:27.368972   73458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:06:27.369160   73458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:06:27.369412   73458 out.go:352] Setting JSON to false
	I0816 18:06:27.369499   73458 mustload.go:65] Loading cluster: embed-certs-777541
	I0816 18:06:27.369810   73458 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:06:27.369889   73458 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/config.json ...
	I0816 18:06:27.370066   73458 mustload.go:65] Loading cluster: embed-certs-777541
	I0816 18:06:27.370193   73458 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:06:27.370226   73458 stop.go:39] StopHost: embed-certs-777541
	I0816 18:06:27.370631   73458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:06:27.370681   73458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:06:27.386237   73458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0816 18:06:27.386777   73458 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:06:27.387394   73458 main.go:141] libmachine: Using API Version  1
	I0816 18:06:27.387421   73458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:06:27.387792   73458 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:06:27.389825   73458 out.go:177] * Stopping node "embed-certs-777541"  ...
	I0816 18:06:27.390931   73458 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 18:06:27.390968   73458 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:06:27.391187   73458 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 18:06:27.391222   73458 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:06:27.393793   73458 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:06:27.394229   73458 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:05:07 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:06:27.394259   73458 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:06:27.394405   73458 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:06:27.394570   73458 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:06:27.394707   73458 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:06:27.394868   73458 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:06:27.502112   73458 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 18:06:27.563088   73458 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 18:06:27.629976   73458 main.go:141] libmachine: Stopping "embed-certs-777541"...
	I0816 18:06:27.630025   73458 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:06:27.631777   73458 main.go:141] libmachine: (embed-certs-777541) Calling .Stop
	I0816 18:06:27.635433   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 0/120
	I0816 18:06:28.637361   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 1/120
	I0816 18:06:29.639191   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 2/120
	I0816 18:06:30.640916   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 3/120
	I0816 18:06:31.642214   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 4/120
	I0816 18:06:32.643846   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 5/120
	I0816 18:06:33.645486   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 6/120
	I0816 18:06:34.647096   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 7/120
	I0816 18:06:35.648447   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 8/120
	I0816 18:06:36.649718   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 9/120
	I0816 18:06:37.651636   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 10/120
	I0816 18:06:38.653183   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 11/120
	I0816 18:06:39.654389   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 12/120
	I0816 18:06:40.655753   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 13/120
	I0816 18:06:41.657225   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 14/120
	I0816 18:06:42.658757   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 15/120
	I0816 18:06:43.660175   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 16/120
	I0816 18:06:44.661400   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 17/120
	I0816 18:06:45.662563   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 18/120
	I0816 18:06:46.663744   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 19/120
	I0816 18:06:47.665651   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 20/120
	I0816 18:06:48.666994   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 21/120
	I0816 18:06:49.668315   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 22/120
	I0816 18:06:50.669769   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 23/120
	I0816 18:06:51.671135   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 24/120
	I0816 18:06:52.672959   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 25/120
	I0816 18:06:53.674378   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 26/120
	I0816 18:06:54.675495   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 27/120
	I0816 18:06:55.677165   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 28/120
	I0816 18:06:56.679273   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 29/120
	I0816 18:06:57.681118   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 30/120
	I0816 18:06:58.683133   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 31/120
	I0816 18:06:59.684335   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 32/120
	I0816 18:07:00.685912   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 33/120
	I0816 18:07:01.687260   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 34/120
	I0816 18:07:02.688861   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 35/120
	I0816 18:07:03.690687   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 36/120
	I0816 18:07:04.691662   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 37/120
	I0816 18:07:05.692664   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 38/120
	I0816 18:07:06.693699   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 39/120
	I0816 18:07:07.695665   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 40/120
	I0816 18:07:08.696806   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 41/120
	I0816 18:07:09.698816   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 42/120
	I0816 18:07:10.699861   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 43/120
	I0816 18:07:11.701154   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 44/120
	I0816 18:07:12.702630   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 45/120
	I0816 18:07:13.703897   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 46/120
	I0816 18:07:14.705176   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 47/120
	I0816 18:07:15.707156   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 48/120
	I0816 18:07:16.708618   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 49/120
	I0816 18:07:17.710729   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 50/120
	I0816 18:07:18.711778   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 51/120
	I0816 18:07:19.712968   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 52/120
	I0816 18:07:20.714737   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 53/120
	I0816 18:07:21.715894   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 54/120
	I0816 18:07:22.717709   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 55/120
	I0816 18:07:23.718886   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 56/120
	I0816 18:07:24.720095   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 57/120
	I0816 18:07:25.721223   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 58/120
	I0816 18:07:26.723197   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 59/120
	I0816 18:07:27.724992   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 60/120
	I0816 18:07:28.725983   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 61/120
	I0816 18:07:29.726942   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 62/120
	I0816 18:07:30.727996   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 63/120
	I0816 18:07:31.728961   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 64/120
	I0816 18:07:32.730729   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 65/120
	I0816 18:07:33.732050   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 66/120
	I0816 18:07:34.733050   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 67/120
	I0816 18:07:35.734674   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 68/120
	I0816 18:07:36.735778   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 69/120
	I0816 18:07:37.737762   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 70/120
	I0816 18:07:38.738711   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 71/120
	I0816 18:07:39.739969   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 72/120
	I0816 18:07:40.741081   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 73/120
	I0816 18:07:41.742696   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 74/120
	I0816 18:07:42.744256   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 75/120
	I0816 18:07:43.745226   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 76/120
	I0816 18:07:44.746829   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 77/120
	I0816 18:07:45.747849   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 78/120
	I0816 18:07:46.748833   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 79/120
	I0816 18:07:47.750402   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 80/120
	I0816 18:07:48.751520   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 81/120
	I0816 18:07:49.752881   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 82/120
	I0816 18:07:50.754737   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 83/120
	I0816 18:07:51.755886   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 84/120
	I0816 18:07:52.757493   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 85/120
	I0816 18:07:53.758687   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 86/120
	I0816 18:07:54.759962   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 87/120
	I0816 18:07:55.761526   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 88/120
	I0816 18:07:56.762973   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 89/120
	I0816 18:07:57.765072   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 90/120
	I0816 18:07:58.767171   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 91/120
	I0816 18:07:59.768576   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 92/120
	I0816 18:08:00.770234   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 93/120
	I0816 18:08:01.772414   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 94/120
	I0816 18:08:02.774037   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 95/120
	I0816 18:08:03.775465   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 96/120
	I0816 18:08:04.777004   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 97/120
	I0816 18:08:05.778371   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 98/120
	I0816 18:08:06.779801   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 99/120
	I0816 18:08:07.781856   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 100/120
	I0816 18:08:08.783448   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 101/120
	I0816 18:08:09.784848   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 102/120
	I0816 18:08:10.786354   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 103/120
	I0816 18:08:11.787863   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 104/120
	I0816 18:08:12.790134   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 105/120
	I0816 18:08:13.791584   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 106/120
	I0816 18:08:14.793457   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 107/120
	I0816 18:08:15.794964   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 108/120
	I0816 18:08:16.796447   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 109/120
	I0816 18:08:17.798786   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 110/120
	I0816 18:08:18.800795   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 111/120
	I0816 18:08:19.802542   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 112/120
	I0816 18:08:20.804136   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 113/120
	I0816 18:08:21.805568   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 114/120
	I0816 18:08:22.807539   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 115/120
	I0816 18:08:23.809067   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 116/120
	I0816 18:08:24.810642   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 117/120
	I0816 18:08:25.812142   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 118/120
	I0816 18:08:26.813603   73458 main.go:141] libmachine: (embed-certs-777541) Waiting for machine to stop 119/120
	I0816 18:08:27.815102   73458 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 18:08:27.815170   73458 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 18:08:27.816984   73458 out.go:201] 
	W0816 18:08:27.818207   73458 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 18:08:27.818222   73458 out.go:270] * 
	* 
	W0816 18:08:27.820858   73458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:08:27.822323   73458 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-777541 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541
E0816 18:08:29.750744   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:29.899903   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:29.906288   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:29.917626   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:29.938954   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:29.980370   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:30.061759   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:30.223344   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:30.545071   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:31.187474   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:32.469131   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:35.031143   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:40.152692   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:43.679595   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:43.685958   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:43.697349   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:43.718699   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:43.760138   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:43.841663   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:44.002918   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:44.324615   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:44.966554   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:46.248481   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541: exit status 3 (18.460931632s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:08:46.284913   74160 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.218:22: connect: no route to host
	E0816 18:08:46.284933   74160 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.218:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-777541" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.96s)
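
The stop path above gave up after 120 one-second polls with the guest still reporting "Running" (GUEST_STOP_TIMEOUT), after which the status check could no longer reach the VM over SSH (no route to host on 192.168.61.218:22). A hedged sketch for inspecting and force-stopping the guest out-of-band, assuming the libvirt domain uses the same name as the minikube profile (not part of the recorded run):

    # hedged sketch; domain name assumed to match the profile name
    sudo virsh list --all                    # confirm the domain and its reported state
    sudo virsh shutdown embed-certs-777541   # request a graceful ACPI shutdown
    sudo virsh destroy embed-certs-777541    # hard power-off if the guest keeps hanging
    out/minikube-linux-amd64 delete -p embed-certs-777541   # or tear the profile down entirely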

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-864476 --alsologtostderr -v=3
E0816 18:07:04.186416   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-864476 --alsologtostderr -v=3: exit status 82 (2m0.500155532s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-864476"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:07:02.422184   73723 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:07:02.422289   73723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:07:02.422299   73723 out.go:358] Setting ErrFile to fd 2...
	I0816 18:07:02.422303   73723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:07:02.422468   73723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:07:02.422727   73723 out.go:352] Setting JSON to false
	I0816 18:07:02.422802   73723 mustload.go:65] Loading cluster: no-preload-864476
	I0816 18:07:02.423967   73723 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:07:02.424275   73723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/config.json ...
	I0816 18:07:02.424475   73723 mustload.go:65] Loading cluster: no-preload-864476
	I0816 18:07:02.424586   73723 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:07:02.424610   73723 stop.go:39] StopHost: no-preload-864476
	I0816 18:07:02.424975   73723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:07:02.425014   73723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:07:02.442334   73723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34535
	I0816 18:07:02.442813   73723 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:07:02.443354   73723 main.go:141] libmachine: Using API Version  1
	I0816 18:07:02.443378   73723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:07:02.443691   73723 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:07:02.445765   73723 out.go:177] * Stopping node "no-preload-864476"  ...
	I0816 18:07:02.447383   73723 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 18:07:02.447429   73723 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:07:02.447652   73723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 18:07:02.447675   73723 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:07:02.450347   73723 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:07:02.450688   73723 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:05:31 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:07:02.450714   73723 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:07:02.450882   73723 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:07:02.451030   73723 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:07:02.451180   73723 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:07:02.451333   73723 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:07:02.541634   73723 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 18:07:02.601707   73723 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 18:07:02.671686   73723 main.go:141] libmachine: Stopping "no-preload-864476"...
	I0816 18:07:02.671713   73723 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:07:02.673394   73723 main.go:141] libmachine: (no-preload-864476) Calling .Stop
	I0816 18:07:02.676935   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 0/120
	I0816 18:07:03.678178   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 1/120
	I0816 18:07:04.679559   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 2/120
	I0816 18:07:05.680855   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 3/120
	I0816 18:07:06.682115   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 4/120
	I0816 18:07:07.683673   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 5/120
	I0816 18:07:08.685380   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 6/120
	I0816 18:07:09.687710   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 7/120
	I0816 18:07:10.689366   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 8/120
	I0816 18:07:11.691748   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 9/120
	I0816 18:07:12.694084   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 10/120
	I0816 18:07:13.696488   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 11/120
	I0816 18:07:14.697835   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 12/120
	I0816 18:07:15.699244   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 13/120
	I0816 18:07:16.700536   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 14/120
	I0816 18:07:17.702525   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 15/120
	I0816 18:07:18.704786   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 16/120
	I0816 18:07:19.706159   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 17/120
	I0816 18:07:20.707442   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 18/120
	I0816 18:07:21.708659   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 19/120
	I0816 18:07:22.710871   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 20/120
	I0816 18:07:23.712396   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 21/120
	I0816 18:07:24.713618   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 22/120
	I0816 18:07:25.714922   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 23/120
	I0816 18:07:26.716201   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 24/120
	I0816 18:07:27.718040   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 25/120
	I0816 18:07:28.719376   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 26/120
	I0816 18:07:29.720874   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 27/120
	I0816 18:07:30.722229   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 28/120
	I0816 18:07:31.723640   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 29/120
	I0816 18:07:32.725706   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 30/120
	I0816 18:07:33.727101   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 31/120
	I0816 18:07:34.728355   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 32/120
	I0816 18:07:35.729558   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 33/120
	I0816 18:07:36.730952   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 34/120
	I0816 18:07:37.733025   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 35/120
	I0816 18:07:38.734449   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 36/120
	I0816 18:07:39.735936   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 37/120
	I0816 18:07:40.737528   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 38/120
	I0816 18:07:41.738953   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 39/120
	I0816 18:07:42.740882   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 40/120
	I0816 18:07:43.742298   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 41/120
	I0816 18:07:44.743676   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 42/120
	I0816 18:07:45.745157   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 43/120
	I0816 18:07:46.746524   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 44/120
	I0816 18:07:47.748417   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 45/120
	I0816 18:07:48.749843   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 46/120
	I0816 18:07:49.751219   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 47/120
	I0816 18:07:50.752984   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 48/120
	I0816 18:07:51.754351   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 49/120
	I0816 18:07:52.756593   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 50/120
	I0816 18:07:53.758115   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 51/120
	I0816 18:07:54.759783   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 52/120
	I0816 18:07:55.761263   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 53/120
	I0816 18:07:56.762686   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 54/120
	I0816 18:07:57.764794   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 55/120
	I0816 18:07:58.767348   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 56/120
	I0816 18:07:59.769465   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 57/120
	I0816 18:08:00.770831   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 58/120
	I0816 18:08:01.772279   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 59/120
	I0816 18:08:02.774365   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 60/120
	I0816 18:08:03.775562   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 61/120
	I0816 18:08:04.777007   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 62/120
	I0816 18:08:05.778713   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 63/120
	I0816 18:08:06.780057   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 64/120
	I0816 18:08:07.781954   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 65/120
	I0816 18:08:08.783384   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 66/120
	I0816 18:08:09.784841   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 67/120
	I0816 18:08:10.786368   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 68/120
	I0816 18:08:11.787990   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 69/120
	I0816 18:08:12.790314   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 70/120
	I0816 18:08:13.791741   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 71/120
	I0816 18:08:14.793778   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 72/120
	I0816 18:08:15.795711   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 73/120
	I0816 18:08:16.796949   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 74/120
	I0816 18:08:17.798601   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 75/120
	I0816 18:08:18.800235   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 76/120
	I0816 18:08:19.801822   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 77/120
	I0816 18:08:20.803507   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 78/120
	I0816 18:08:21.805177   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 79/120
	I0816 18:08:22.807574   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 80/120
	I0816 18:08:23.809171   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 81/120
	I0816 18:08:24.811153   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 82/120
	I0816 18:08:25.812428   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 83/120
	I0816 18:08:26.813875   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 84/120
	I0816 18:08:27.816182   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 85/120
	I0816 18:08:28.817612   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 86/120
	I0816 18:08:29.819106   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 87/120
	I0816 18:08:30.820550   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 88/120
	I0816 18:08:31.822004   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 89/120
	I0816 18:08:32.824185   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 90/120
	I0816 18:08:33.825768   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 91/120
	I0816 18:08:34.827515   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 92/120
	I0816 18:08:35.829070   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 93/120
	I0816 18:08:36.830559   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 94/120
	I0816 18:08:37.832787   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 95/120
	I0816 18:08:38.834274   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 96/120
	I0816 18:08:39.835695   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 97/120
	I0816 18:08:40.837376   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 98/120
	I0816 18:08:41.838858   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 99/120
	I0816 18:08:42.841476   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 100/120
	I0816 18:08:43.843455   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 101/120
	I0816 18:08:44.845057   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 102/120
	I0816 18:08:45.846819   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 103/120
	I0816 18:08:46.848186   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 104/120
	I0816 18:08:47.850093   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 105/120
	I0816 18:08:48.851661   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 106/120
	I0816 18:08:49.853192   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 107/120
	I0816 18:08:50.854652   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 108/120
	I0816 18:08:51.856317   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 109/120
	I0816 18:08:52.858469   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 110/120
	I0816 18:08:53.859969   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 111/120
	I0816 18:08:54.861815   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 112/120
	I0816 18:08:55.863221   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 113/120
	I0816 18:08:56.864501   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 114/120
	I0816 18:08:57.866330   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 115/120
	I0816 18:08:58.868268   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 116/120
	I0816 18:08:59.869657   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 117/120
	I0816 18:09:00.871995   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 118/120
	I0816 18:09:01.873241   73723 main.go:141] libmachine: (no-preload-864476) Waiting for machine to stop 119/120
	I0816 18:09:02.873905   73723 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 18:09:02.873981   73723 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 18:09:02.876058   73723 out.go:201] 
	W0816 18:09:02.877438   73723 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 18:09:02.877458   73723 out.go:270] * 
	* 
	W0816 18:09:02.880138   73723 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:09:02.881496   73723 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-864476 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476
E0816 18:09:04.174283   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:07.069654   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:10.712332   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:10.876030   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.231601   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.237924   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.249264   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.270549   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.311954   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.393643   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.555229   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:11.876971   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:12.518979   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:13.800844   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:16.363052   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476: exit status 3 (18.473842652s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:09:21.356984   74567 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.50:22: connect: no route to host
	E0816 18:09:21.357009   74567 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.50:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-864476" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.97s)
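
A hedged reading of the failure above: the driver requests a guest shutdown, then polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop N/120") before giving up with the "unable to stop vm" error, which in this run surfaces as GUEST_STOP_TIMEOUT and exit status 82. The sketch below is illustrative only, not minikube's actual implementation; vmStopper and fakeVM are hypothetical names.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmStopper is a hypothetical stand-in for a libmachine-style driver.
	type vmStopper interface {
		Stop() error            // request a guest shutdown
		State() (string, error) // e.g. "Running" or "Stopped"
	}

	var errStillRunning = errors.New(`unable to stop vm, current state "Running"`)

	// stopWithTimeout mirrors the "Waiting for machine to stop N/120" pattern:
	// one status poll per second, up to the given number of attempts.
	func stopWithTimeout(vm vmStopper, attempts int) error {
		if err := vm.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			state, err := vm.State()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errStillRunning
	}

	// fakeVM never stops, so the demo reproduces the timeout path quickly.
	type fakeVM struct{}

	func (fakeVM) Stop() error            { return nil }
	func (fakeVM) State() (string, error) { return "Running", nil }

	func main() {
		// The run above used 120 attempts (about two minutes); use 3 here.
		if err := stopWithTimeout(fakeVM{}, 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}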

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-256678 --alsologtostderr -v=3
E0816 18:07:35.342083   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:45.148034   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:48.775106   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:48.781442   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:48.792716   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:48.814029   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:48.855373   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:48.937008   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:49.098525   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:49.420163   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:50.061487   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:51.343162   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:53.905039   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:07:59.027304   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:09.268896   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:21.061484   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-256678 --alsologtostderr -v=3: exit status 82 (2m0.508132917s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-256678"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:07:18.443748   73863 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:07:18.444025   73863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:07:18.444034   73863 out.go:358] Setting ErrFile to fd 2...
	I0816 18:07:18.444039   73863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:07:18.444224   73863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:07:18.444452   73863 out.go:352] Setting JSON to false
	I0816 18:07:18.444543   73863 mustload.go:65] Loading cluster: default-k8s-diff-port-256678
	I0816 18:07:18.444912   73863 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:07:18.444985   73863 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/config.json ...
	I0816 18:07:18.445178   73863 mustload.go:65] Loading cluster: default-k8s-diff-port-256678
	I0816 18:07:18.445285   73863 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:07:18.445321   73863 stop.go:39] StopHost: default-k8s-diff-port-256678
	I0816 18:07:18.445699   73863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:07:18.445748   73863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:07:18.460874   73863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35217
	I0816 18:07:18.461349   73863 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:07:18.462036   73863 main.go:141] libmachine: Using API Version  1
	I0816 18:07:18.462062   73863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:07:18.462411   73863 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:07:18.464751   73863 out.go:177] * Stopping node "default-k8s-diff-port-256678"  ...
	I0816 18:07:18.466123   73863 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 18:07:18.466148   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:07:18.466395   73863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 18:07:18.466420   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:07:18.469013   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:07:18.469395   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:05:56 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:07:18.469448   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:07:18.469629   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:07:18.469794   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:07:18.469962   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:07:18.470120   73863 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:07:18.578591   73863 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 18:07:18.651211   73863 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 18:07:18.720248   73863 main.go:141] libmachine: Stopping "default-k8s-diff-port-256678"...
	I0816 18:07:18.720270   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:07:18.721535   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Stop
	I0816 18:07:18.725114   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 0/120
	I0816 18:07:19.726737   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 1/120
	I0816 18:07:20.727781   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 2/120
	I0816 18:07:21.728946   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 3/120
	I0816 18:07:22.730117   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 4/120
	I0816 18:07:23.731863   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 5/120
	I0816 18:07:24.732867   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 6/120
	I0816 18:07:25.734673   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 7/120
	I0816 18:07:26.735562   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 8/120
	I0816 18:07:27.736707   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 9/120
	I0816 18:07:28.738495   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 10/120
	I0816 18:07:29.739580   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 11/120
	I0816 18:07:30.740535   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 12/120
	I0816 18:07:31.741483   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 13/120
	I0816 18:07:32.742692   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 14/120
	I0816 18:07:33.744367   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 15/120
	I0816 18:07:34.745282   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 16/120
	I0816 18:07:35.746705   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 17/120
	I0816 18:07:36.747898   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 18/120
	I0816 18:07:37.748959   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 19/120
	I0816 18:07:38.749943   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 20/120
	I0816 18:07:39.751020   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 21/120
	I0816 18:07:40.751965   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 22/120
	I0816 18:07:41.753018   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 23/120
	I0816 18:07:42.754679   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 24/120
	I0816 18:07:43.756509   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 25/120
	I0816 18:07:44.757413   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 26/120
	I0816 18:07:45.758673   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 27/120
	I0816 18:07:46.759696   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 28/120
	I0816 18:07:47.760836   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 29/120
	I0816 18:07:48.762790   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 30/120
	I0816 18:07:49.763788   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 31/120
	I0816 18:07:50.764767   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 32/120
	I0816 18:07:51.766750   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 33/120
	I0816 18:07:52.768013   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 34/120
	I0816 18:07:53.769658   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 35/120
	I0816 18:07:54.770740   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 36/120
	I0816 18:07:55.771807   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 37/120
	I0816 18:07:56.772859   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 38/120
	I0816 18:07:57.774857   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 39/120
	I0816 18:07:58.776733   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 40/120
	I0816 18:07:59.778569   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 41/120
	I0816 18:08:00.779504   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 42/120
	I0816 18:08:01.780658   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 43/120
	I0816 18:08:02.781530   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 44/120
	I0816 18:08:03.783213   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 45/120
	I0816 18:08:04.784400   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 46/120
	I0816 18:08:05.785432   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 47/120
	I0816 18:08:06.786612   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 48/120
	I0816 18:08:07.787519   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 49/120
	I0816 18:08:08.789170   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 50/120
	I0816 18:08:09.790836   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 51/120
	I0816 18:08:10.791930   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 52/120
	I0816 18:08:11.793058   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 53/120
	I0816 18:08:12.794884   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 54/120
	I0816 18:08:13.796762   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 55/120
	I0816 18:08:14.798998   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 56/120
	I0816 18:08:15.800157   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 57/120
	I0816 18:08:16.801369   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 58/120
	I0816 18:08:17.802775   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 59/120
	I0816 18:08:18.804781   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 60/120
	I0816 18:08:19.806849   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 61/120
	I0816 18:08:20.807936   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 62/120
	I0816 18:08:21.808981   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 63/120
	I0816 18:08:22.811084   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 64/120
	I0816 18:08:23.812695   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 65/120
	I0816 18:08:24.813599   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 66/120
	I0816 18:08:25.814683   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 67/120
	I0816 18:08:26.815762   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 68/120
	I0816 18:08:27.817532   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 69/120
	I0816 18:08:28.819423   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 70/120
	I0816 18:08:29.820516   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 71/120
	I0816 18:08:30.821586   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 72/120
	I0816 18:08:31.823034   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 73/120
	I0816 18:08:32.824336   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 74/120
	I0816 18:08:33.826260   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 75/120
	I0816 18:08:34.827646   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 76/120
	I0816 18:08:35.829248   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 77/120
	I0816 18:08:36.830728   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 78/120
	I0816 18:08:37.832582   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 79/120
	I0816 18:08:38.834470   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 80/120
	I0816 18:08:39.836043   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 81/120
	I0816 18:08:40.837594   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 82/120
	I0816 18:08:41.839081   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 83/120
	I0816 18:08:42.841327   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 84/120
	I0816 18:08:43.843571   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 85/120
	I0816 18:08:44.845152   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 86/120
	I0816 18:08:45.846600   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 87/120
	I0816 18:08:46.848038   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 88/120
	I0816 18:08:47.849754   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 89/120
	I0816 18:08:48.851961   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 90/120
	I0816 18:08:49.853296   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 91/120
	I0816 18:08:50.854671   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 92/120
	I0816 18:08:51.856169   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 93/120
	I0816 18:08:52.857834   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 94/120
	I0816 18:08:53.859812   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 95/120
	I0816 18:08:54.861680   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 96/120
	I0816 18:08:55.863086   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 97/120
	I0816 18:08:56.864523   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 98/120
	I0816 18:08:57.865906   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 99/120
	I0816 18:08:58.867921   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 100/120
	I0816 18:08:59.869406   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 101/120
	I0816 18:09:00.871173   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 102/120
	I0816 18:09:01.872649   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 103/120
	I0816 18:09:02.874368   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 104/120
	I0816 18:09:03.876700   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 105/120
	I0816 18:09:04.878184   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 106/120
	I0816 18:09:05.879556   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 107/120
	I0816 18:09:06.881150   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 108/120
	I0816 18:09:07.883110   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 109/120
	I0816 18:09:08.885577   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 110/120
	I0816 18:09:09.887153   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 111/120
	I0816 18:09:10.888597   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 112/120
	I0816 18:09:11.889886   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 113/120
	I0816 18:09:12.891295   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 114/120
	I0816 18:09:13.893337   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 115/120
	I0816 18:09:14.894650   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 116/120
	I0816 18:09:15.896220   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 117/120
	I0816 18:09:16.897961   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 118/120
	I0816 18:09:17.899467   73863 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for machine to stop 119/120
	I0816 18:09:18.901119   73863 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 18:09:18.901195   73863 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 18:09:18.903107   73863 out.go:201] 
	W0816 18:09:18.904310   73863 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 18:09:18.904327   73863 out.go:270] * 
	* 
	W0816 18:09:18.906841   73863 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:09:18.908022   73863 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-256678 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678: exit status 3 (18.57540959s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:09:37.484956   74645 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	E0816 18:09:37.484989   74645 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-256678" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)
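
Before the stop request, the log shows the node's CNI and Kubernetes configuration being copied into /var/lib/minikube/backup (the sudo mkdir and rsync --archive --relative runs at 18:07:18). Below is a minimal local sketch of those same commands; it only illustrates what the runner executes, with the caveat that minikube runs them over SSH inside the guest rather than on the host.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// backupConfigDirs mirrors the backup step visible in the log: create the
	// backup directory, then rsync each source directory into it, preserving
	// the full source path (--relative).
	func backupConfigDirs(dirs []string) error {
		if out, err := exec.Command("sudo", "mkdir", "-p", "/var/lib/minikube/backup").CombinedOutput(); err != nil {
			return fmt.Errorf("mkdir: %v: %s", err, out)
		}
		for _, d := range dirs {
			args := []string{"rsync", "--archive", "--relative", d, "/var/lib/minikube/backup"}
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("rsync %s: %v: %s", d, err, out)
			}
		}
		return nil
	}

	func main() {
		// The same two directories the log backs up.
		if err := backupConfigDirs([]string{"/etc/cni", "/etc/kubernetes"}); err != nil {
			fmt.Println("backup failed:", err)
		}
	}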

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541
E0816 18:08:48.810471   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541: exit status 3 (3.167905751s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:08:49.452955   74265 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.218:22: connect: no route to host
	E0816 18:08:49.452976   74265 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.218:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-777541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0816 18:08:50.394227   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:08:53.932355   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-777541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152234083s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.218:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-777541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541: exit status 3 (3.067515836s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:08:58.672993   74346 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.218:22: connect: no route to host
	E0816 18:08:58.673014   74346 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.218:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-777541" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)
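
Every post-stop probe in this group fails the same way: the SSH dial to the node returns "connect: no route to host", so minikube can neither read /var nor list containers and reports state "Error". A quick way to confirm that symptom from the host is a plain TCP dial to the node's SSH port; the sketch below does only that. The address is the embed-certs node IP taken from this log and would need substituting for other profiles.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.61.218:22" // node IP from the failure above
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// Expect something like "connect: no route to host" while the VM
			// is wedged between Running and Stopped.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable; the status error lies elsewhere")
	}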

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-783465 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-783465 create -f testdata/busybox.yaml: exit status 1 (43.816153ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-783465" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-783465 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 6 (211.169683ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:08:57.774349   74418 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-783465" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 6 (214.813271ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:08:57.990209   74449 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-783465" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
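
The DeployApp failure never reaches the cluster: kubectl reports that the old-k8s-version-783465 context is missing, and the follow-up status calls confirm the endpoint is absent from /home/jenkins/minikube-integration/19461-9545/kubeconfig. A small client-go sketch for checking whether a named context exists in a kubeconfig is below; the path and context name are taken from this run and are assumptions outside it.

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/19461-9545/kubeconfig" // path from the log
		name := "old-k8s-version-783465"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts[name]; !ok {
			// Same condition kubectl reports above as: context "..." does not exist.
			fmt.Printf("context %q not found in %s\n", name, kubeconfig)
			return
		}
		fmt.Printf("context %q present (current context: %s)\n", name, cfg.CurrentContext)
	}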

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (112.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-783465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-783465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m52.33729693s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-783465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-783465 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-783465 describe deploy/metrics-server -n kube-system: exit status 1 (43.70491ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-783465" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-783465 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 6 (215.116185ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:10:50.586421   75286 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-783465" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (112.60s)
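
Here the addon enable does reach the node, but the apiserver itself is not answering: every kubectl apply inside the VM fails with "The connection to the server localhost:8443 was refused". A minimal probe of that endpoint is sketched below, intended to be run inside the node (for example via minikube ssh); it only checks whether anything is listening on the apiserver port and skips certificate verification, which is acceptable for a reachability check but not for a real client.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// Reachability probe only; do not skip verification in a real client.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// 8443 is the in-VM apiserver address named in the failure above.
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // e.g. "connection refused"
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
	}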

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476
E0816 18:09:21.484674   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476: exit status 3 (3.167996732s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:09:24.525037   74691 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.50:22: connect: no route to host
	E0816 18:09:24.525059   74691 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.50:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-864476 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0816 18:09:24.656025   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-864476 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152145531s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.50:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-864476 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476
E0816 18:09:31.726277   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:32.866256   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:32.872632   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:32.883969   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:32.905328   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:32.946700   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:33.028126   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:33.189712   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:33.511609   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476: exit status 3 (3.063541217s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:09:33.741036   74789 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.50:22: connect: no route to host
	E0816 18:09:33.741060   74789 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.50:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-864476" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
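Note: this failure is a follow-on from the preceding Stop test: the VM was left in state "Error" rather than "Stopped", so SSH to 192.168.50.50:22 returns "no route to host" and both the status probe and `addons enable dashboard` fail before any addon logic runs. A minimal sketch of how such a post-stop probe could be polled until the host actually reports "Stopped" is shown below; it shells out to the same minikube binary, but the helper, profile name and retry parameters are illustrative only and not part of the test suite.

// waitstopped.go - illustrative sketch only, not part of the minikube test suite.
// Polls `minikube status --format={{.Host}}` for a profile until it reports
// "Stopped" or the deadline expires, instead of failing on the first probe
// that cannot reach the VM.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForStopped(binary, profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Exit status is deliberately ignored: when the guest is unreachable,
		// minikube exits non-zero but still prints the host state to stdout.
		out, _ := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if state == "Stopped" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("host state is %q after %s, want \"Stopped\"", state, timeout)
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	if err := waitForStopped("out/minikube-linux-amd64", "no-preload-864476", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}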

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
E0816 18:09:37.997733   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678: exit status 3 (3.16764891s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:09:40.653048   74879 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	E0816 18:09:40.653073   74879 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-256678 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0816 18:09:43.119644   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-256678 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152209143s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-256678 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678: exit status 3 (3.063257716s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 18:09:49.868998   74959 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host
	E0816 18:09:49.869016   74959 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-256678" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
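Note: the same pattern as the no-preload failure above, this time against 192.168.72.144:22. Before retrying `addons enable`, it can help to separate "guest unreachable over SSH" from "addon genuinely broken"; a small TCP probe along these lines makes that distinction explicit (the address is taken from the log above, the helper itself is hypothetical).

// sshprobe.go - illustrative diagnostic, not minikube code.
// A plain TCP dial to the VM's SSH port separates "guest unreachable"
// from a genuine addon failure before `addons enable` is retried.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	addr := "192.168.72.144:22" // address reported in the failure above
	if !sshReachable(addr, 5*time.Second) {
		fmt.Printf("SSH port %s unreachable; inspect the VM and libvirt network before blaming the addon\n", addr)
		return
	}
	fmt.Println("SSH reachable; the addon enable failure is worth investigating on its own")
}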

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (715.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 18:10:54.805695   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:55.607858   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:11:12.269183   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:11:13.760421   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:11:23.209858   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:11:27.539879   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:11:36.570665   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:11:50.911066   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:11:55.091773   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:12:16.728837   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:12:48.775740   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:12:58.492501   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:13:16.476766   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:13:21.061432   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:13:29.899954   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:13:43.679156   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:13:57.602144   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:14:11.231089   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:14:11.381825   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:14:32.866358   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:14:38.933127   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:14:44.133462   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:15:00.570364   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:15:14.631352   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:15:42.334522   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:16:12.269192   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:16:23.210432   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:17:48.775710   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:18:21.061338   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:18:29.899762   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:18:43.679366   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:19:11.231489   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
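Note: the cert_rotation errors interleaved above appear to come from a client certificate reload watcher that still references client.crt files of profiles deleted earlier in the run (the various *-791304 network profiles, addons-671083, functional-654639); they are noise relative to this test but dominate the log. A quick, illustrative way to list which profile certificates still exist under the minikube home (path taken from the log) is sketched below.

// listcerts.go - illustrative; lists which profile client certificates still
// exist under the minikube home, so the missing ones that keep triggering
// cert_rotation errors are easy to spot.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Path taken from the log lines above; adjust for another environment.
	pattern := "/home/jenkins/minikube-integration/19461-9545/.minikube/profiles/*/client.crt"
	matches, err := filepath.Glob(pattern)
	if err != nil {
		fmt.Println("bad glob pattern:", err)
		return
	}
	for _, m := range matches {
		fmt.Println(m)
	}
}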
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m51.944542721s)

                                                
                                                
-- stdout --
	* [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:10:53.101401   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101412   75402 out.go:358] Setting ErrFile to fd 2...
	I0816 18:10:53.101418   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101600   75402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:10:53.102131   75402 out.go:352] Setting JSON to false
	I0816 18:10:53.103018   75402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1723825102,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:10:53.103076   75402 start.go:139] virtualization: kvm guest
	I0816 18:10:53.105216   75402 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:10:53.106496   75402 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:10:53.106504   75402 notify.go:220] Checking for updates...
	I0816 18:10:53.109235   75402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:10:53.110572   75402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:10:53.111747   75402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:10:53.113164   75402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:10:53.114589   75402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:10:53.116284   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:10:53.116746   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.116806   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.132445   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0816 18:10:53.132886   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.133456   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.133494   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.133836   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.134015   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.135791   75402 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:10:53.136942   75402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:10:53.137229   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.137260   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.151853   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0816 18:10:53.152327   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.152881   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.152905   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.153159   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.153307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.188002   75402 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:10:53.189287   75402 start.go:297] selected driver: kvm2
	I0816 18:10:53.189309   75402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.189432   75402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:10:53.190098   75402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.190187   75402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:10:53.205024   75402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:10:53.205386   75402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:10:53.205417   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:10:53.205425   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:10:53.205458   75402 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.205557   75402 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.207241   75402 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:10:53.208254   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:10:53.208286   75402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:10:53.208298   75402 cache.go:56] Caching tarball of preloaded images
	I0816 18:10:53.208386   75402 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:10:53.208400   75402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:10:53.208510   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:10:53.208736   75402 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:14:15.814480   75402 start.go:364] duration metric: took 3m22.605706427s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:14:15.814546   75402 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:15.814554   75402 fix.go:54] fixHost starting: 
	I0816 18:14:15.815001   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:15.815062   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:15.834710   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0816 18:14:15.835124   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:15.835653   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:14:15.835676   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:15.836005   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:15.836258   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:15.836392   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:14:15.838010   75402 fix.go:112] recreateIfNeeded on old-k8s-version-783465: state=Stopped err=<nil>
	I0816 18:14:15.838043   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	W0816 18:14:15.838200   75402 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:15.840214   75402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	I0816 18:14:15.841411   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .Start
	I0816 18:14:15.841576   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:14:15.842263   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:14:15.842609   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:14:15.843023   75402 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:14:15.844141   75402 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:14:17.215163   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:14:17.216445   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.216933   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.217029   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.216922   76298 retry.go:31] will retry after 286.243503ms: waiting for machine to come up
	I0816 18:14:17.504645   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.505240   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.505262   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.505175   76298 retry.go:31] will retry after 275.715235ms: waiting for machine to come up
	I0816 18:14:17.782804   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.783365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.783392   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.783292   76298 retry.go:31] will retry after 343.088129ms: waiting for machine to come up
	I0816 18:14:18.127538   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.128044   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.128077   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.127958   76298 retry.go:31] will retry after 543.91951ms: waiting for machine to come up
	I0816 18:14:18.673778   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.674328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.674351   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.674274   76298 retry.go:31] will retry after 694.978788ms: waiting for machine to come up
	I0816 18:14:19.370976   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.371577   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.371605   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.371538   76298 retry.go:31] will retry after 578.640883ms: waiting for machine to come up
	I0816 18:14:19.952328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.952917   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.952941   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.952863   76298 retry.go:31] will retry after 820.19233ms: waiting for machine to come up
	I0816 18:14:20.774767   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:20.775175   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:20.775200   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:20.775134   76298 retry.go:31] will retry after 1.262201815s: waiting for machine to come up
	I0816 18:14:22.038872   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:22.039357   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:22.039385   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:22.039302   76298 retry.go:31] will retry after 1.164593889s: waiting for machine to come up
	I0816 18:14:23.205567   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:23.206051   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:23.206078   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:23.206007   76298 retry.go:31] will retry after 2.304886921s: waiting for machine to come up
	I0816 18:14:25.512748   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:25.513295   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:25.513321   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:25.513261   76298 retry.go:31] will retry after 2.603393394s: waiting for machine to come up
	I0816 18:14:28.118105   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:28.118675   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:28.118706   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:28.118637   76298 retry.go:31] will retry after 2.400714985s: waiting for machine to come up
	I0816 18:14:30.521623   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:30.522157   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:30.522196   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:30.522111   76298 retry.go:31] will retry after 3.210603239s: waiting for machine to come up
	I0816 18:14:33.735394   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735898   75402 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:14:33.735925   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735933   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:14:33.736407   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.736439   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:14:33.736459   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | skip adding static IP to network mk-old-k8s-version-783465 - found existing host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"}
	I0816 18:14:33.736478   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:14:33.736492   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:14:33.739028   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739377   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.739397   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739596   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:14:33.739689   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:14:33.739724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:33.739747   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:14:33.739785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:14:33.861036   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:33.861405   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:14:33.862105   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:33.864850   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865245   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.865272   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865542   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:14:33.865796   75402 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:33.865820   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:33.866053   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.868422   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868761   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.868795   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868911   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.869095   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869267   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869415   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.869579   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.869796   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.869810   75402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:33.972880   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:33.972907   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973141   75402 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:14:33.973172   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973378   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.976198   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976530   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.976563   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976747   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.976945   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977086   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977228   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.977369   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.977529   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.977540   75402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:14:34.086092   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:14:34.086123   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.088785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089107   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.089132   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089285   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.089527   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089684   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089828   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.089997   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.090152   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.090168   75402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:34.200744   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:34.200779   75402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:34.200834   75402 buildroot.go:174] setting up certificates
	I0816 18:14:34.200848   75402 provision.go:84] configureAuth start
	I0816 18:14:34.200862   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:34.201175   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:34.203868   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204297   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.204344   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204506   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.207067   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207441   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.207464   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207810   75402 provision.go:143] copyHostCerts
	I0816 18:14:34.207869   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:34.207892   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:34.207951   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:34.208058   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:34.208069   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:34.208103   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:34.208180   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:34.208192   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:34.208220   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:34.208291   75402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
	I0816 18:14:34.413800   75402 provision.go:177] copyRemoteCerts
	I0816 18:14:34.413857   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:34.413881   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.416724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417138   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.417173   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417345   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.417673   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.417894   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.418089   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.495519   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:34.517414   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:14:34.540423   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:14:34.563983   75402 provision.go:87] duration metric: took 363.122639ms to configureAuth
	I0816 18:14:34.564019   75402 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:34.564229   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:14:34.564299   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.567149   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567550   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.567580   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567753   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.567935   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568098   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568255   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.568448   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.568659   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.568680   75402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:34.824064   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:34.824091   75402 machine.go:96] duration metric: took 958.278616ms to provisionDockerMachine
	I0816 18:14:34.824106   75402 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:14:34.824120   75402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:34.824169   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:34.824556   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:34.824599   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.827203   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827517   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.827547   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827677   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.827869   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.828033   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.828171   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.912148   75402 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:34.916652   75402 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:34.916681   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:34.916755   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:34.916864   75402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:34.916989   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:34.927061   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:34.949703   75402 start.go:296] duration metric: took 125.581331ms for postStartSetup
	I0816 18:14:34.949743   75402 fix.go:56] duration metric: took 19.13519024s for fixHost
	I0816 18:14:34.949763   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.952740   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953090   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.953124   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.953532   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953715   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953861   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.954029   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.954229   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.954242   75402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:35.053143   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832075.025252523
	
	I0816 18:14:35.053171   75402 fix.go:216] guest clock: 1723832075.025252523
	I0816 18:14:35.053180   75402 fix.go:229] Guest: 2024-08-16 18:14:35.025252523 +0000 UTC Remote: 2024-08-16 18:14:34.949747236 +0000 UTC m=+221.880938919 (delta=75.505287ms)
	I0816 18:14:35.053204   75402 fix.go:200] guest clock delta is within tolerance: 75.505287ms
	I0816 18:14:35.053211   75402 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 19.238692888s
	I0816 18:14:35.053243   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.053549   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:35.056365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.056792   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.056823   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.057009   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057509   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057731   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057831   75402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:35.057892   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.057951   75402 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:35.057972   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.060543   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060733   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060987   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061016   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061126   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061148   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061154   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061319   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061339   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061456   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061518   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061639   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061720   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.061773   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.174137   75402 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:35.181704   75402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:35.323490   75402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:35.330733   75402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:35.330807   75402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:35.350653   75402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:35.350679   75402 start.go:495] detecting cgroup driver to use...
	I0816 18:14:35.350763   75402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:35.372307   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:35.386513   75402 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:35.386598   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:35.400406   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:35.414761   75402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:35.540356   75402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:35.675726   75402 docker.go:233] disabling docker service ...
	I0816 18:14:35.675793   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:35.691169   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:35.707288   75402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:35.858149   75402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:35.981654   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:35.996396   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:36.013656   75402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:14:36.013711   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.023839   75402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:36.023907   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.033889   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.043727   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.053496   75402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:36.063694   75402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:36.072919   75402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:36.072979   75402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:36.085707   75402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:36.095377   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:36.219235   75402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:36.384915   75402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:36.384990   75402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:36.392122   75402 start.go:563] Will wait 60s for crictl version
	I0816 18:14:36.392196   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:36.397589   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:36.443581   75402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:36.443710   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.473740   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.512542   75402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:14:36.513678   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:36.517404   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.517912   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:36.517948   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.518190   75402 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:36.523577   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:36.536188   75402 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:36.536361   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:14:36.536425   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:36.587027   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:36.587085   75402 ssh_runner.go:195] Run: which lz4
	I0816 18:14:36.590780   75402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:36.594635   75402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:36.594673   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 18:14:38.127403   75402 crio.go:462] duration metric: took 1.536659915s to copy over tarball
	I0816 18:14:38.127504   75402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:41.109575   75402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982013621s)
	I0816 18:14:41.109639   75402 crio.go:469] duration metric: took 2.982198625s to extract the tarball
	I0816 18:14:41.109650   75402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:41.152940   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:41.185863   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:41.185892   75402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:14:41.185982   75402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.186003   75402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.186036   75402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.186044   75402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.186103   75402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:14:41.186171   75402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187521   75402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:14:41.187532   75402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187542   75402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.187527   75402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.187595   75402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.187605   75402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.187688   75402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.187840   75402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.421551   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:14:41.462506   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.467716   75402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:14:41.467758   75402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:14:41.467810   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508571   75402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:14:41.508638   75402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.508687   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508691   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.514560   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.520003   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.526475   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.526892   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.533271   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.569269   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.569426   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.694043   75402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:14:41.694100   75402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.694049   75402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:14:41.694210   75402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.694173   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.694268   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.701292   75402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:14:41.701337   75402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.701389   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.707345   75402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:14:41.707415   75402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.707467   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.711820   75402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:14:41.711854   75402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.711896   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.723813   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.723850   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.723814   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.723939   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.723951   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.724003   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.724060   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.872645   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.872674   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:14:41.873747   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.873786   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.873891   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.873899   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.873960   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.997519   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:14:42.002048   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:42.002091   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:42.002140   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:42.002178   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:42.002218   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:42.070993   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:42.115418   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:14:42.115527   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:14:42.115623   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:14:42.115631   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:14:42.115689   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:14:42.235706   75402 cache_images.go:92] duration metric: took 1.049784726s to LoadCachedImages
	W0816 18:14:42.235807   75402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0816 18:14:42.235821   75402 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:14:42.235939   75402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:42.236024   75402 ssh_runner.go:195] Run: crio config
	I0816 18:14:42.286742   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:14:42.286763   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:42.286771   75402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:42.286789   75402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:14:42.286904   75402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:42.286961   75402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:14:42.297015   75402 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:42.297098   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:42.306400   75402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:14:42.322812   75402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:42.339791   75402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 18:14:42.356930   75402 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:42.360578   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:42.373248   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:42.495499   75402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:42.511910   75402 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:14:42.511942   75402 certs.go:194] generating shared ca certs ...
	I0816 18:14:42.511964   75402 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:42.512147   75402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:42.512206   75402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:42.512220   75402 certs.go:256] generating profile certs ...
	I0816 18:14:42.512361   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:14:42.512431   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:14:42.512483   75402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:14:42.512664   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:42.512709   75402 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:42.512724   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:42.512754   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:42.512794   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:42.512825   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:42.512881   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:42.513660   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:42.552291   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:42.585617   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:42.611017   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:42.638092   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:14:42.676877   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:14:42.710091   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:42.743734   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:42.779905   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:42.802779   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:42.826432   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:42.849286   75402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:42.866901   75402 ssh_runner.go:195] Run: openssl version
	I0816 18:14:42.872283   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:42.882976   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887432   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887504   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.893275   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:42.903687   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:42.915232   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919669   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919735   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.925282   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:42.937888   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:42.949994   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954495   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954548   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.960295   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:42.972006   75402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:42.976450   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:42.982741   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:42.988649   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:42.995021   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:43.000965   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:43.007030   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:43.012891   75402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:43.012983   75402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:43.013071   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.050670   75402 cri.go:89] found id: ""
	I0816 18:14:43.050741   75402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:43.060748   75402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:43.060773   75402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:43.060825   75402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:43.070299   75402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:43.071251   75402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:14:43.071945   75402 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-783465" cluster setting kubeconfig missing "old-k8s-version-783465" context setting]
	I0816 18:14:43.072870   75402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:43.141753   75402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:43.154269   75402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0816 18:14:43.154324   75402 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:43.154341   75402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:43.154404   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.192966   75402 cri.go:89] found id: ""
	I0816 18:14:43.193035   75402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:43.213101   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:43.222811   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:43.222826   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:43.222870   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:43.232196   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:43.232261   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:43.241633   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:43.250751   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:43.250800   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:43.260197   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.268943   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:43.269000   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.277887   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:43.286281   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:43.286391   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:43.295899   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:43.306026   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:43.441487   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.213457   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.431649   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.553955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.646817   75402 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:44.646923   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.147202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.648050   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.147958   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.647398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.646992   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.147987   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.646974   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.147114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.647020   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.147765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.647135   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.147506   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.647568   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.647865   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.146986   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.647279   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.147587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.647911   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.147322   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.647765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.147695   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.647296   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.147031   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.647108   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.147661   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.647270   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.147355   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.647821   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.148023   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.647165   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.147669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.647960   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.147721   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.647932   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.147098   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.646983   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.147320   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.647649   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.647999   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.147901   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.647340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.147339   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.648033   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.147308   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.647669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.147149   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.647072   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.147381   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.647567   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.147101   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.647587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.146972   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.647842   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.147558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.647755   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.147408   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.647810   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.147888   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.647476   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.647785   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.647852   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.647013   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.147027   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.647100   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.147070   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.147251   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.647856   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.147427   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.647231   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.647030   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.147677   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.647324   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.147973   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.147160   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.646963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.147620   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.647918   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.146994   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.647364   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.147332   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.647773   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.147276   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.647794   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.147398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.647565   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.147139   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.647961   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.647087   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.147881   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.646988   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.147118   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.647978   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.147541   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.647423   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.147051   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.647726   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.147192   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.647318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.147186   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.647662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.147044   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.647787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.147638   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.647490   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.147787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.647959   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.147938   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.647855   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.147781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.647710   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:44.647796   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:44.682176   75402 cri.go:89] found id: ""
	I0816 18:15:44.682207   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.682218   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:44.682226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:44.682285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:44.717500   75402 cri.go:89] found id: ""
	I0816 18:15:44.717530   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.717540   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:44.717552   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:44.717620   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:44.751816   75402 cri.go:89] found id: ""
	I0816 18:15:44.751847   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.751858   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:44.751865   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:44.751942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:44.783236   75402 cri.go:89] found id: ""
	I0816 18:15:44.783260   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.783267   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:44.783272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:44.783337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:44.813087   75402 cri.go:89] found id: ""
	I0816 18:15:44.813110   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.813116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:44.813122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:44.813166   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:44.843568   75402 cri.go:89] found id: ""
	I0816 18:15:44.843599   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.843609   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:44.843616   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:44.843679   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:44.873694   75402 cri.go:89] found id: ""
	I0816 18:15:44.873723   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.873734   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:44.873741   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:44.873808   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:44.906183   75402 cri.go:89] found id: ""
	I0816 18:15:44.906212   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.906222   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:44.906231   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:44.906241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:44.958963   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:44.958993   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:44.972390   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:44.972415   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:45.091624   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:45.091645   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:45.091661   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:45.159927   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:45.159963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:47.698398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:47.711848   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:47.711917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:47.744247   75402 cri.go:89] found id: ""
	I0816 18:15:47.744278   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.744288   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:47.744295   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:47.744374   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:47.783188   75402 cri.go:89] found id: ""
	I0816 18:15:47.783211   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.783219   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:47.783224   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:47.783270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:47.829284   75402 cri.go:89] found id: ""
	I0816 18:15:47.829320   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.829333   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:47.829341   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:47.829413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:47.879482   75402 cri.go:89] found id: ""
	I0816 18:15:47.879514   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.879525   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:47.879532   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:47.879606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:47.913766   75402 cri.go:89] found id: ""
	I0816 18:15:47.913797   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.913808   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:47.913815   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:47.913880   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:47.947262   75402 cri.go:89] found id: ""
	I0816 18:15:47.947340   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.947353   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:47.947362   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:47.947427   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:47.979638   75402 cri.go:89] found id: ""
	I0816 18:15:47.979667   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.979678   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:47.979685   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:47.979741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:48.010246   75402 cri.go:89] found id: ""
	I0816 18:15:48.010277   75402 logs.go:276] 0 containers: []
	W0816 18:15:48.010288   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:48.010296   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:48.010310   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:48.083916   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:48.083953   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:48.120254   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:48.120285   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:48.169590   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:48.169628   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:48.182821   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:48.182850   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:48.254088   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:50.755114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:50.768167   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:50.768250   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:50.800881   75402 cri.go:89] found id: ""
	I0816 18:15:50.800906   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.800913   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:50.800918   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:50.800969   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:50.833538   75402 cri.go:89] found id: ""
	I0816 18:15:50.833567   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.833578   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:50.833586   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:50.833649   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:50.867306   75402 cri.go:89] found id: ""
	I0816 18:15:50.867336   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.867347   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:50.867353   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:50.867400   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:50.900029   75402 cri.go:89] found id: ""
	I0816 18:15:50.900055   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.900064   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:50.900072   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:50.900135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:50.933604   75402 cri.go:89] found id: ""
	I0816 18:15:50.933630   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.933638   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:50.933643   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:50.933707   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:50.966102   75402 cri.go:89] found id: ""
	I0816 18:15:50.966131   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.966141   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:50.966149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:50.966210   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:50.998007   75402 cri.go:89] found id: ""
	I0816 18:15:50.998036   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.998047   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:50.998054   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:50.998115   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:51.032306   75402 cri.go:89] found id: ""
	I0816 18:15:51.032342   75402 logs.go:276] 0 containers: []
	W0816 18:15:51.032349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:51.032357   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:51.032369   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:51.083186   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:51.083222   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:51.096072   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:51.096153   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:51.162667   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:51.162693   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:51.162709   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:51.241913   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:51.241954   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:53.779323   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:53.793358   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:53.793433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:53.827380   75402 cri.go:89] found id: ""
	I0816 18:15:53.827414   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.827424   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:53.827430   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:53.827489   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:53.867331   75402 cri.go:89] found id: ""
	I0816 18:15:53.867370   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.867380   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:53.867386   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:53.867438   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:53.899445   75402 cri.go:89] found id: ""
	I0816 18:15:53.899477   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.899489   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:53.899498   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:53.899588   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:53.936527   75402 cri.go:89] found id: ""
	I0816 18:15:53.936556   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.936568   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:53.936576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:53.936653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:53.970739   75402 cri.go:89] found id: ""
	I0816 18:15:53.970765   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.970773   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:53.970780   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:53.970842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:54.004119   75402 cri.go:89] found id: ""
	I0816 18:15:54.004150   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.004159   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:54.004164   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:54.004217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:54.038370   75402 cri.go:89] found id: ""
	I0816 18:15:54.038400   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.038411   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:54.038416   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:54.038472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:54.079346   75402 cri.go:89] found id: ""
	I0816 18:15:54.079375   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.079383   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:54.079392   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:54.079403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:54.116551   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:54.116586   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:54.169930   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:54.169970   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:54.182416   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:54.182448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:54.253516   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:54.253539   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:54.253559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:56.833124   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:56.846139   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:56.846211   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:56.880899   75402 cri.go:89] found id: ""
	I0816 18:15:56.880928   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.880939   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:56.880945   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:56.880994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:56.913362   75402 cri.go:89] found id: ""
	I0816 18:15:56.913393   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.913406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:56.913415   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:56.913507   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:56.951876   75402 cri.go:89] found id: ""
	I0816 18:15:56.951904   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.951914   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:56.951919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:56.951988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:56.986335   75402 cri.go:89] found id: ""
	I0816 18:15:56.986358   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.986366   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:56.986372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:56.986423   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:57.022485   75402 cri.go:89] found id: ""
	I0816 18:15:57.022511   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.022522   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:57.022529   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:57.022641   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:57.055436   75402 cri.go:89] found id: ""
	I0816 18:15:57.055463   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.055470   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:57.055476   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:57.055536   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:57.085930   75402 cri.go:89] found id: ""
	I0816 18:15:57.085965   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.085975   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:57.085981   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:57.086032   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:57.120436   75402 cri.go:89] found id: ""
	I0816 18:15:57.120466   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.120477   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:57.120488   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:57.120501   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:57.202161   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:57.202218   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:57.243766   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:57.243805   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:57.295552   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:57.295585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:57.307769   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:57.307802   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:57.390480   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:59.891480   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:59.904766   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:59.904836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:59.939209   75402 cri.go:89] found id: ""
	I0816 18:15:59.939241   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.939252   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:59.939260   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:59.939324   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:59.971782   75402 cri.go:89] found id: ""
	I0816 18:15:59.971812   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.971822   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:59.971832   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:59.971894   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:00.018585   75402 cri.go:89] found id: ""
	I0816 18:16:00.018630   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.018643   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:00.018654   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:00.018722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.050484   75402 cri.go:89] found id: ""
	I0816 18:16:00.050520   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.050532   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:00.050540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:00.050603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:00.082900   75402 cri.go:89] found id: ""
	I0816 18:16:00.082930   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.082942   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:00.082951   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:00.083025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:00.115330   75402 cri.go:89] found id: ""
	I0816 18:16:00.115363   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.115372   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:00.115378   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:00.115442   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:00.150898   75402 cri.go:89] found id: ""
	I0816 18:16:00.150935   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.150952   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:00.150960   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:00.151033   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:00.193304   75402 cri.go:89] found id: ""
	I0816 18:16:00.193338   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.193349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:00.193359   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:00.193370   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:00.247340   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:00.247376   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:00.260470   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:00.260500   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:00.336483   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:00.336506   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:00.336521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:00.421251   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:00.421289   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:02.964042   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:02.977284   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:02.977381   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:03.009533   75402 cri.go:89] found id: ""
	I0816 18:16:03.009574   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.009586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:03.009594   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:03.009673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:03.043756   75402 cri.go:89] found id: ""
	I0816 18:16:03.043784   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.043794   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:03.043802   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:03.043867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:03.078817   75402 cri.go:89] found id: ""
	I0816 18:16:03.078840   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.078848   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:03.078853   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:03.078906   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:03.112874   75402 cri.go:89] found id: ""
	I0816 18:16:03.112903   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.112912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:03.112918   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:03.112985   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:03.152008   75402 cri.go:89] found id: ""
	I0816 18:16:03.152040   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.152052   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:03.152059   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:03.152125   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:03.187353   75402 cri.go:89] found id: ""
	I0816 18:16:03.187386   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.187396   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:03.187404   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:03.187467   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:03.220860   75402 cri.go:89] found id: ""
	I0816 18:16:03.220895   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.220903   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:03.220909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:03.220958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:03.252202   75402 cri.go:89] found id: ""
	I0816 18:16:03.252240   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.252247   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:03.252256   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:03.252268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:03.286907   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:03.286934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:03.338212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:03.338249   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:03.352548   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:03.352585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:03.427580   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:03.427610   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:03.427626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.011792   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:06.024201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:06.024277   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:06.058328   75402 cri.go:89] found id: ""
	I0816 18:16:06.058356   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.058367   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:06.058373   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:06.058433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:06.091262   75402 cri.go:89] found id: ""
	I0816 18:16:06.091298   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.091311   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:06.091318   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:06.091382   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:06.124114   75402 cri.go:89] found id: ""
	I0816 18:16:06.124146   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.124154   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:06.124159   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:06.124220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:06.155379   75402 cri.go:89] found id: ""
	I0816 18:16:06.155406   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.155416   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:06.155422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:06.155471   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:06.189442   75402 cri.go:89] found id: ""
	I0816 18:16:06.189472   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.189480   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:06.189485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:06.189538   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:06.228881   75402 cri.go:89] found id: ""
	I0816 18:16:06.228910   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.228921   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:06.228929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:06.229003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:06.262272   75402 cri.go:89] found id: ""
	I0816 18:16:06.262299   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.262310   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:06.262317   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:06.262386   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:06.295427   75402 cri.go:89] found id: ""
	I0816 18:16:06.295456   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.295468   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:06.295478   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:06.295492   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:06.347569   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:06.347608   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:06.362786   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:06.362825   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:06.432020   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:06.432044   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:06.432059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.512085   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:06.512120   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.051957   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:09.066630   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:09.066690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:09.101484   75402 cri.go:89] found id: ""
	I0816 18:16:09.101515   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.101526   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:09.101536   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:09.101614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:09.140645   75402 cri.go:89] found id: ""
	I0816 18:16:09.140677   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.140689   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:09.140696   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:09.140758   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:09.174666   75402 cri.go:89] found id: ""
	I0816 18:16:09.174698   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.174708   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:09.174717   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:09.174780   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:09.209715   75402 cri.go:89] found id: ""
	I0816 18:16:09.209748   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.209758   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:09.209767   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:09.209845   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:09.243681   75402 cri.go:89] found id: ""
	I0816 18:16:09.243712   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.243720   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:09.243726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:09.243781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:09.278058   75402 cri.go:89] found id: ""
	I0816 18:16:09.278090   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.278102   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:09.278111   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:09.278178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:09.313092   75402 cri.go:89] found id: ""
	I0816 18:16:09.313122   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.313132   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:09.313137   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:09.313201   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:09.345203   75402 cri.go:89] found id: ""
	I0816 18:16:09.345229   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.345236   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:09.345245   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:09.345259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:09.358198   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:09.358225   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:09.422024   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:09.422047   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:09.422059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:09.498684   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:09.498717   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.535349   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:09.535382   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.087472   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:12.100412   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:12.100477   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:12.133982   75402 cri.go:89] found id: ""
	I0816 18:16:12.134018   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.134030   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:12.134038   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:12.134100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:12.166466   75402 cri.go:89] found id: ""
	I0816 18:16:12.166497   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.166507   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:12.166514   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:12.166589   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:12.197752   75402 cri.go:89] found id: ""
	I0816 18:16:12.197779   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.197790   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:12.197797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:12.197856   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:12.239759   75402 cri.go:89] found id: ""
	I0816 18:16:12.239789   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.239801   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:12.239810   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:12.239871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:12.273263   75402 cri.go:89] found id: ""
	I0816 18:16:12.273292   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.273302   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:12.273310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:12.273370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:12.308788   75402 cri.go:89] found id: ""
	I0816 18:16:12.308820   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.308831   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:12.308839   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:12.308897   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:12.345243   75402 cri.go:89] found id: ""
	I0816 18:16:12.345274   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.345281   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:12.345288   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:12.345341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:12.379939   75402 cri.go:89] found id: ""
	I0816 18:16:12.379968   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.379978   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:12.379989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:12.380004   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.436097   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:12.436130   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:12.449328   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:12.449357   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:12.518723   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:12.518749   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:12.518764   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:12.600228   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:12.600268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:15.137940   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:15.150617   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:15.150694   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:15.186029   75402 cri.go:89] found id: ""
	I0816 18:16:15.186057   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.186067   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:15.186074   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:15.186134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:15.219812   75402 cri.go:89] found id: ""
	I0816 18:16:15.219840   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.219851   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:15.219864   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:15.219927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:15.253434   75402 cri.go:89] found id: ""
	I0816 18:16:15.253462   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.253472   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:15.253479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:15.253542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:15.286697   75402 cri.go:89] found id: ""
	I0816 18:16:15.286729   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.286745   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:15.286751   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:15.286810   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:15.319363   75402 cri.go:89] found id: ""
	I0816 18:16:15.319405   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.319415   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:15.319422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:15.319506   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:15.353900   75402 cri.go:89] found id: ""
	I0816 18:16:15.353924   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.353931   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:15.353937   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:15.353991   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:15.389086   75402 cri.go:89] found id: ""
	I0816 18:16:15.389114   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.389122   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:15.389127   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:15.389184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:15.424069   75402 cri.go:89] found id: ""
	I0816 18:16:15.424099   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.424110   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:15.424121   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:15.424136   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:15.482703   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:15.482738   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:15.496859   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:15.496886   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:15.562178   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:15.562196   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:15.562212   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:15.643484   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:15.643521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:18.180963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:18.194705   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:18.194783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:18.231302   75402 cri.go:89] found id: ""
	I0816 18:16:18.231337   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.231348   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:18.231355   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:18.231413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:18.264098   75402 cri.go:89] found id: ""
	I0816 18:16:18.264124   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.264135   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:18.264155   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:18.264228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:18.298133   75402 cri.go:89] found id: ""
	I0816 18:16:18.298165   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.298178   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:18.298186   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:18.298252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:18.331323   75402 cri.go:89] found id: ""
	I0816 18:16:18.331354   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.331362   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:18.331367   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:18.331416   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:18.365677   75402 cri.go:89] found id: ""
	I0816 18:16:18.365709   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.365718   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:18.365724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:18.365774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:18.399801   75402 cri.go:89] found id: ""
	I0816 18:16:18.399835   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.399844   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:18.399850   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:18.399908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:18.438148   75402 cri.go:89] found id: ""
	I0816 18:16:18.438179   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.438189   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:18.438197   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:18.438257   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:18.472185   75402 cri.go:89] found id: ""
	I0816 18:16:18.472215   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.472223   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:18.472232   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:18.472243   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:18.523369   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:18.523400   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:18.536152   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:18.536179   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:18.611539   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:18.611560   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:18.611571   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:18.688043   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:18.688079   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:21.229163   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:21.242641   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:21.242717   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:21.275188   75402 cri.go:89] found id: ""
	I0816 18:16:21.275213   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.275220   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:21.275226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:21.275275   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:21.308377   75402 cri.go:89] found id: ""
	I0816 18:16:21.308406   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.308417   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:21.308424   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:21.308475   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:21.341067   75402 cri.go:89] found id: ""
	I0816 18:16:21.341098   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.341106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:21.341112   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:21.341170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:21.372707   75402 cri.go:89] found id: ""
	I0816 18:16:21.372743   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.372756   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:21.372763   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:21.372847   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:21.410210   75402 cri.go:89] found id: ""
	I0816 18:16:21.410241   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.410252   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:21.410259   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:21.410323   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:21.444840   75402 cri.go:89] found id: ""
	I0816 18:16:21.444863   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.444872   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:21.444879   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:21.444942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:21.478278   75402 cri.go:89] found id: ""
	I0816 18:16:21.478319   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.478327   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:21.478333   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:21.478395   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:21.512026   75402 cri.go:89] found id: ""
	I0816 18:16:21.512063   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.512073   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:21.512090   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:21.512111   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:21.564800   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:21.564834   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:21.577343   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:21.577368   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:21.663216   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:21.663238   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:21.663251   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:21.741960   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:21.741994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:24.282136   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:24.296452   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:24.296513   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:24.337173   75402 cri.go:89] found id: ""
	I0816 18:16:24.337200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.337210   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:24.337218   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:24.337282   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:24.374163   75402 cri.go:89] found id: ""
	I0816 18:16:24.374200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.374213   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:24.374222   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:24.374287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:24.407823   75402 cri.go:89] found id: ""
	I0816 18:16:24.407854   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.407866   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:24.407881   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:24.407953   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:24.444006   75402 cri.go:89] found id: ""
	I0816 18:16:24.444032   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.444042   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:24.444049   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:24.444113   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:24.479082   75402 cri.go:89] found id: ""
	I0816 18:16:24.479110   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.479119   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:24.479125   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:24.479174   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:24.524738   75402 cri.go:89] found id: ""
	I0816 18:16:24.524764   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.524775   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:24.524782   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:24.524842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:24.560298   75402 cri.go:89] found id: ""
	I0816 18:16:24.560326   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.560335   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:24.560343   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:24.560406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:24.597182   75402 cri.go:89] found id: ""
	I0816 18:16:24.597214   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.597227   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:24.597239   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:24.597254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:24.653063   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:24.653106   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:24.665940   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:24.665972   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:24.736599   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:24.736639   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:24.736657   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:24.821883   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:24.821939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.359558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:27.382980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:27.383053   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:27.416766   75402 cri.go:89] found id: ""
	I0816 18:16:27.416793   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.416802   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:27.416811   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:27.416873   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:27.452966   75402 cri.go:89] found id: ""
	I0816 18:16:27.452988   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.452995   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:27.453001   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:27.453050   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:27.485850   75402 cri.go:89] found id: ""
	I0816 18:16:27.485885   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.485896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:27.485903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:27.485960   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:27.517667   75402 cri.go:89] found id: ""
	I0816 18:16:27.517694   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.517704   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:27.517711   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:27.517774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:27.553547   75402 cri.go:89] found id: ""
	I0816 18:16:27.553574   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.553582   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:27.553593   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:27.553653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:27.586857   75402 cri.go:89] found id: ""
	I0816 18:16:27.586884   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.586893   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:27.586898   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:27.586957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:27.621739   75402 cri.go:89] found id: ""
	I0816 18:16:27.621766   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.621776   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:27.621784   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:27.621844   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:27.657772   75402 cri.go:89] found id: ""
	I0816 18:16:27.657797   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.657805   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:27.657819   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:27.657831   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:27.729769   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:27.729796   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:27.729810   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:27.813351   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:27.813403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.852985   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:27.853010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:27.908434   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:27.908476   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.422781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:30.435987   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:30.436070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:30.470878   75402 cri.go:89] found id: ""
	I0816 18:16:30.470907   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.470918   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:30.470926   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:30.470983   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:30.504940   75402 cri.go:89] found id: ""
	I0816 18:16:30.504969   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.504980   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:30.504988   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:30.505058   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:30.538680   75402 cri.go:89] found id: ""
	I0816 18:16:30.538708   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.538716   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:30.538722   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:30.538788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:30.574757   75402 cri.go:89] found id: ""
	I0816 18:16:30.574782   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.574791   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:30.574797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:30.574853   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:30.612500   75402 cri.go:89] found id: ""
	I0816 18:16:30.612529   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.612539   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:30.612547   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:30.612613   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:30.644572   75402 cri.go:89] found id: ""
	I0816 18:16:30.644595   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.644603   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:30.644609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:30.644678   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:30.678199   75402 cri.go:89] found id: ""
	I0816 18:16:30.678232   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.678243   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:30.678252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:30.678331   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:30.709435   75402 cri.go:89] found id: ""
	I0816 18:16:30.709470   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.709482   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:30.709494   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:30.709511   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.723430   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:30.723464   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:30.800340   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:30.800374   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:30.800390   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:30.883945   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:30.883986   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:30.922107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:30.922139   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.480016   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:33.494178   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:33.494241   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:33.529497   75402 cri.go:89] found id: ""
	I0816 18:16:33.529527   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.529546   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:33.529554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:33.529614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:33.566670   75402 cri.go:89] found id: ""
	I0816 18:16:33.566700   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.566711   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:33.566718   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:33.566781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:33.603898   75402 cri.go:89] found id: ""
	I0816 18:16:33.603926   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.603937   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:33.603944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:33.604003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:33.636077   75402 cri.go:89] found id: ""
	I0816 18:16:33.636111   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.636125   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:33.636134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:33.636200   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:33.668974   75402 cri.go:89] found id: ""
	I0816 18:16:33.669002   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.669011   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:33.669017   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:33.669070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:33.700981   75402 cri.go:89] found id: ""
	I0816 18:16:33.701010   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.701019   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:33.701026   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:33.701088   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:33.735430   75402 cri.go:89] found id: ""
	I0816 18:16:33.735463   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.735474   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:33.735481   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:33.735539   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:33.779797   75402 cri.go:89] found id: ""
	I0816 18:16:33.779829   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.779840   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:33.779851   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:33.779865   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:33.824873   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:33.824908   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.874177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:33.874217   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:33.888535   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:33.888561   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:33.957590   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:33.957608   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:33.957627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:36.533660   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:36.546542   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:36.546606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:36.584056   75402 cri.go:89] found id: ""
	I0816 18:16:36.584085   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.584094   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:36.584099   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:36.584149   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:36.622143   75402 cri.go:89] found id: ""
	I0816 18:16:36.622172   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.622184   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:36.622193   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:36.622262   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:36.655479   75402 cri.go:89] found id: ""
	I0816 18:16:36.655509   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.655520   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:36.655528   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:36.655603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:36.688044   75402 cri.go:89] found id: ""
	I0816 18:16:36.688076   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.688088   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:36.688096   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:36.688161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:36.725831   75402 cri.go:89] found id: ""
	I0816 18:16:36.725861   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.725868   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:36.725874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:36.725925   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:36.758398   75402 cri.go:89] found id: ""
	I0816 18:16:36.758433   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.758444   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:36.758453   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:36.758517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:36.791097   75402 cri.go:89] found id: ""
	I0816 18:16:36.791126   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.791136   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:36.791144   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:36.791207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:36.829337   75402 cri.go:89] found id: ""
	I0816 18:16:36.829369   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.829380   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:36.829391   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:36.829405   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:36.881898   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:36.881932   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:36.895584   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:36.895618   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:36.967175   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:36.967197   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:36.967213   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:37.046993   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:37.047025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:39.588683   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:39.607205   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:39.607287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:39.640517   75402 cri.go:89] found id: ""
	I0816 18:16:39.640541   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.640549   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:39.640554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:39.640604   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:39.673777   75402 cri.go:89] found id: ""
	I0816 18:16:39.673805   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.673813   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:39.673818   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:39.673899   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:39.709574   75402 cri.go:89] found id: ""
	I0816 18:16:39.709598   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.709606   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:39.709611   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:39.709666   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:39.743946   75402 cri.go:89] found id: ""
	I0816 18:16:39.743971   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.743979   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:39.743985   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:39.744041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:39.776140   75402 cri.go:89] found id: ""
	I0816 18:16:39.776171   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.776181   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:39.776187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:39.776254   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:39.808697   75402 cri.go:89] found id: ""
	I0816 18:16:39.808719   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.808728   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:39.808734   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:39.808793   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:39.840163   75402 cri.go:89] found id: ""
	I0816 18:16:39.840190   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.840200   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:39.840206   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:39.840270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:39.874396   75402 cri.go:89] found id: ""
	I0816 18:16:39.874419   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.874426   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:39.874434   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:39.874448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:39.927922   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:39.927963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:39.942048   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:39.942076   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:40.012143   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:40.012166   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:40.012181   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:40.088798   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:40.088844   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:42.625875   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:42.640386   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:42.640448   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:42.675201   75402 cri.go:89] found id: ""
	I0816 18:16:42.675224   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.675231   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:42.675236   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:42.675293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:42.705156   75402 cri.go:89] found id: ""
	I0816 18:16:42.705182   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.705192   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:42.705199   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:42.705258   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:42.738921   75402 cri.go:89] found id: ""
	I0816 18:16:42.738948   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.738956   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:42.738962   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:42.739013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:42.771130   75402 cri.go:89] found id: ""
	I0816 18:16:42.771160   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.771168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:42.771175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:42.771231   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:42.805774   75402 cri.go:89] found id: ""
	I0816 18:16:42.805803   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.805811   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:42.805817   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:42.805879   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:42.840248   75402 cri.go:89] found id: ""
	I0816 18:16:42.840277   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.840293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:42.840302   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:42.840360   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:42.873260   75402 cri.go:89] found id: ""
	I0816 18:16:42.873287   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.873297   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:42.873322   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:42.873383   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:42.906205   75402 cri.go:89] found id: ""
	I0816 18:16:42.906230   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.906238   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:42.906247   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:42.906257   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:42.959235   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:42.959272   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:42.972063   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:42.972090   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:43.039530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:43.039558   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:43.039569   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:43.115486   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:43.115519   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
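
The block above is one iteration of the wait loop minikube runs while the control plane is down: probe for a kube-apiserver process, list CRI containers for each control-plane component, and, when nothing is found, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. The cycles that follow repeat the same sequence every few seconds. A minimal shell sketch of that probe sequence, built from the commands that appear in this log (lightly simplified); the component list matches the probes above, while the retry bound and the ~3 s sleep are illustrative assumptions read off the log timestamps, not minikube's actual parameters:

    # Illustrative sketch only -- not minikube source code.
    for attempt in $(seq 1 90); do                              # retry bound is an assumption
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver process found"; break
      fi
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                  kube-controller-manager kindnet kubernetes-dashboard; do
        sudo crictl ps -a --quiet --name="$name"                # empty output => no container for this component
      done
      sudo journalctl -u kubelet -n 400                         # kubelet logs
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
      sudo journalctl -u crio -n 400                            # CRI-O logs
      sudo crictl ps -a || sudo docker ps -a                    # container status (simplified from the log's form)
      sleep 3                                                   # interval is an assumption (~3 s between probes in the log)
    done

The "describe nodes" step in each cycle fails with "connection refused" on localhost:8443 because kubectl is pointed at the local apiserver, which never comes up, so every iteration ends with only host-level logs collected.
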
	I0816 18:16:45.651040   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:45.663718   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:45.663812   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:45.696548   75402 cri.go:89] found id: ""
	I0816 18:16:45.696578   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.696586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:45.696591   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:45.696663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:45.731032   75402 cri.go:89] found id: ""
	I0816 18:16:45.731059   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.731068   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:45.731073   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:45.731126   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:45.764801   75402 cri.go:89] found id: ""
	I0816 18:16:45.764829   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.764840   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:45.764846   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:45.764908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:45.800768   75402 cri.go:89] found id: ""
	I0816 18:16:45.800795   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.800803   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:45.800809   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:45.800858   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:45.841460   75402 cri.go:89] found id: ""
	I0816 18:16:45.841486   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.841493   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:45.841505   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:45.841566   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:45.875230   75402 cri.go:89] found id: ""
	I0816 18:16:45.875254   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.875261   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:45.875266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:45.875319   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:45.907711   75402 cri.go:89] found id: ""
	I0816 18:16:45.907739   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.907747   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:45.907753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:45.907804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:45.943147   75402 cri.go:89] found id: ""
	I0816 18:16:45.943171   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.943182   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:45.943192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:45.943206   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:45.998459   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:45.998491   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:46.013237   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:46.013267   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:46.079248   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:46.079273   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:46.079288   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:46.158842   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:46.158874   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:48.696728   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:48.710946   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:48.711041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:48.746696   75402 cri.go:89] found id: ""
	I0816 18:16:48.746727   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.746735   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:48.746741   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:48.746803   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:48.781496   75402 cri.go:89] found id: ""
	I0816 18:16:48.781522   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.781532   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:48.781539   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:48.781603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:48.815628   75402 cri.go:89] found id: ""
	I0816 18:16:48.815654   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.815665   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:48.815673   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:48.815736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:48.848990   75402 cri.go:89] found id: ""
	I0816 18:16:48.849018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.849030   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:48.849040   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:48.849098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:48.886924   75402 cri.go:89] found id: ""
	I0816 18:16:48.886949   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.886960   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:48.886968   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:48.887022   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:48.923989   75402 cri.go:89] found id: ""
	I0816 18:16:48.924018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.924030   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:48.924038   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:48.924102   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:48.959513   75402 cri.go:89] found id: ""
	I0816 18:16:48.959546   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.959556   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:48.959562   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:48.959614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:48.995615   75402 cri.go:89] found id: ""
	I0816 18:16:48.995651   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.995662   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:48.995673   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:48.995688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:49.008440   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:49.008468   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:49.076761   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:49.076780   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:49.076797   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:49.152855   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:49.152893   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:49.190857   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:49.190887   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:51.745344   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:51.759552   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:51.759628   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:51.795494   75402 cri.go:89] found id: ""
	I0816 18:16:51.795520   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.795531   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:51.795539   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:51.795600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:51.833162   75402 cri.go:89] found id: ""
	I0816 18:16:51.833188   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.833198   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:51.833205   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:51.833265   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:51.866940   75402 cri.go:89] found id: ""
	I0816 18:16:51.866968   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.866979   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:51.866986   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:51.867051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:51.899824   75402 cri.go:89] found id: ""
	I0816 18:16:51.899857   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.899867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:51.899874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:51.899937   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:51.932273   75402 cri.go:89] found id: ""
	I0816 18:16:51.932297   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.932312   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:51.932320   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:51.932390   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:51.966885   75402 cri.go:89] found id: ""
	I0816 18:16:51.966911   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.966922   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:51.966930   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:51.966996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:52.002988   75402 cri.go:89] found id: ""
	I0816 18:16:52.003020   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.003029   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:52.003035   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:52.003098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:52.038858   75402 cri.go:89] found id: ""
	I0816 18:16:52.038894   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.038909   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:52.038919   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:52.038933   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:52.076404   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:52.076431   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:52.127735   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:52.127767   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:52.140657   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:52.140680   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:52.202961   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:52.202989   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:52.203008   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:54.787095   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:54.801258   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:54.801332   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:54.837987   75402 cri.go:89] found id: ""
	I0816 18:16:54.838018   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.838028   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:54.838034   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:54.838118   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:54.872439   75402 cri.go:89] found id: ""
	I0816 18:16:54.872466   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.872477   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:54.872490   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:54.872554   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:54.904676   75402 cri.go:89] found id: ""
	I0816 18:16:54.904706   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.904717   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:54.904724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:54.904783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:54.938101   75402 cri.go:89] found id: ""
	I0816 18:16:54.938134   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.938145   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:54.938154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:54.938218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:54.977409   75402 cri.go:89] found id: ""
	I0816 18:16:54.977442   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.977453   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:54.977460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:54.977521   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:55.013248   75402 cri.go:89] found id: ""
	I0816 18:16:55.013275   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.013286   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:55.013294   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:55.013363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:55.044555   75402 cri.go:89] found id: ""
	I0816 18:16:55.044588   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.044597   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:55.044603   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:55.044690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:55.075970   75402 cri.go:89] found id: ""
	I0816 18:16:55.075997   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.076006   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:55.076014   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:55.076025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:55.149982   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:55.150017   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:55.190160   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:55.190194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:55.242629   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:55.242660   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:55.255229   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:55.255254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:55.324775   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:57.824996   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:57.838666   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:57.838740   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:57.872828   75402 cri.go:89] found id: ""
	I0816 18:16:57.872861   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.872869   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:57.872875   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:57.872927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:57.907324   75402 cri.go:89] found id: ""
	I0816 18:16:57.907354   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.907366   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:57.907373   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:57.907433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:57.941657   75402 cri.go:89] found id: ""
	I0816 18:16:57.941682   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.941689   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:57.941695   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:57.941746   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:57.981424   75402 cri.go:89] found id: ""
	I0816 18:16:57.981466   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.981480   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:57.981489   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:57.981562   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:58.015534   75402 cri.go:89] found id: ""
	I0816 18:16:58.015587   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.015598   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:58.015606   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:58.015669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:58.047875   75402 cri.go:89] found id: ""
	I0816 18:16:58.047908   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.047917   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:58.047923   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:58.047976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:58.079294   75402 cri.go:89] found id: ""
	I0816 18:16:58.079324   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.079334   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:58.079342   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:58.079406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:58.112357   75402 cri.go:89] found id: ""
	I0816 18:16:58.112389   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.112401   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:58.112413   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:58.112428   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:58.159903   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:58.159934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:58.172763   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:58.172789   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:58.245827   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:58.245856   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:58.245872   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:58.325008   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:58.325049   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:00.864354   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:00.877517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:00.877593   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:00.915396   75402 cri.go:89] found id: ""
	I0816 18:17:00.915428   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.915438   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:00.915446   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:00.915611   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:00.953950   75402 cri.go:89] found id: ""
	I0816 18:17:00.953977   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.953987   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:00.953993   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:00.954051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:00.987673   75402 cri.go:89] found id: ""
	I0816 18:17:00.987703   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.987713   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:00.987721   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:00.987784   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:01.021230   75402 cri.go:89] found id: ""
	I0816 18:17:01.021277   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.021308   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:01.021315   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:01.021388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:01.057087   75402 cri.go:89] found id: ""
	I0816 18:17:01.057117   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.057127   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:01.057135   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:01.057207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:01.094142   75402 cri.go:89] found id: ""
	I0816 18:17:01.094168   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.094176   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:01.094183   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:01.094233   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:01.132799   75402 cri.go:89] found id: ""
	I0816 18:17:01.132824   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.132831   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:01.132837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:01.132888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:01.173367   75402 cri.go:89] found id: ""
	I0816 18:17:01.173402   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.173414   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:01.173425   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:01.173443   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:01.186856   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:01.186896   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:01.259913   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:01.259941   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:01.259955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:01.340914   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:01.340947   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:01.381023   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:01.381058   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:03.933420   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:03.946940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:03.947008   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:03.984529   75402 cri.go:89] found id: ""
	I0816 18:17:03.984560   75402 logs.go:276] 0 containers: []
	W0816 18:17:03.984571   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:03.984581   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:03.984668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:04.017900   75402 cri.go:89] found id: ""
	I0816 18:17:04.017929   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.017940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:04.017948   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:04.018009   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:04.050837   75402 cri.go:89] found id: ""
	I0816 18:17:04.050871   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.050888   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:04.050896   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:04.050959   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:04.085448   75402 cri.go:89] found id: ""
	I0816 18:17:04.085477   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.085487   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:04.085495   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:04.085564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:04.118177   75402 cri.go:89] found id: ""
	I0816 18:17:04.118203   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.118213   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:04.118220   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:04.118284   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:04.150289   75402 cri.go:89] found id: ""
	I0816 18:17:04.150317   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.150330   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:04.150338   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:04.150404   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:04.184258   75402 cri.go:89] found id: ""
	I0816 18:17:04.184282   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.184290   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:04.184295   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:04.184347   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:04.217142   75402 cri.go:89] found id: ""
	I0816 18:17:04.217174   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.217184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:04.217192   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:04.217204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:04.253000   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:04.253034   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:04.304978   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:04.305018   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:04.320210   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:04.320241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:04.396146   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:04.396169   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:04.396184   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:06.980747   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:06.992944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:06.993006   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:07.026303   75402 cri.go:89] found id: ""
	I0816 18:17:07.026356   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.026368   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:07.026376   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:07.026443   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:07.059226   75402 cri.go:89] found id: ""
	I0816 18:17:07.059257   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.059268   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:07.059277   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:07.059339   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:07.092142   75402 cri.go:89] found id: ""
	I0816 18:17:07.092171   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.092182   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:07.092188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:07.092248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:07.125284   75402 cri.go:89] found id: ""
	I0816 18:17:07.125330   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.125347   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:07.125355   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:07.125420   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:07.163890   75402 cri.go:89] found id: ""
	I0816 18:17:07.163919   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.163930   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:07.163938   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:07.164002   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:07.197988   75402 cri.go:89] found id: ""
	I0816 18:17:07.198014   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.198025   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:07.198033   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:07.198116   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:07.232709   75402 cri.go:89] found id: ""
	I0816 18:17:07.232738   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.232749   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:07.232756   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:07.232817   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:07.264514   75402 cri.go:89] found id: ""
	I0816 18:17:07.264548   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.264558   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:07.264569   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:07.264583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:07.316138   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:07.316173   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:07.329659   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:07.329688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:07.397345   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:07.397380   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:07.397397   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:07.481245   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:07.481280   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:10.024405   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:10.036860   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:10.036927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.069402   75402 cri.go:89] found id: ""
	I0816 18:17:10.069436   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.069448   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:10.069458   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:10.069511   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:10.101480   75402 cri.go:89] found id: ""
	I0816 18:17:10.101508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.101518   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:10.101529   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:10.101601   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:10.131673   75402 cri.go:89] found id: ""
	I0816 18:17:10.131708   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.131719   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:10.131726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:10.131821   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:10.166476   75402 cri.go:89] found id: ""
	I0816 18:17:10.166508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.166518   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:10.166525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:10.166590   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:10.199296   75402 cri.go:89] found id: ""
	I0816 18:17:10.199321   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.199332   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:10.199340   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:10.199406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:10.232640   75402 cri.go:89] found id: ""
	I0816 18:17:10.232672   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.232683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:10.232691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:10.232775   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:10.263958   75402 cri.go:89] found id: ""
	I0816 18:17:10.263988   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.263998   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:10.264003   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:10.264052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:10.295904   75402 cri.go:89] found id: ""
	I0816 18:17:10.295929   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.295937   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:10.295946   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:10.295957   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:10.344874   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:10.344909   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:10.358523   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:10.358552   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:10.433311   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:10.433334   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:10.433351   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:10.514580   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:10.514620   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.053815   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:13.068517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:13.068597   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:13.104251   75402 cri.go:89] found id: ""
	I0816 18:17:13.104279   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.104313   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:13.104321   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:13.104375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:13.137415   75402 cri.go:89] found id: ""
	I0816 18:17:13.137442   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.137453   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:13.137461   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:13.137510   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:13.174165   75402 cri.go:89] found id: ""
	I0816 18:17:13.174191   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.174203   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:13.174210   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:13.174271   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:13.206789   75402 cri.go:89] found id: ""
	I0816 18:17:13.206814   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.206823   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:13.206831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:13.206892   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:13.238950   75402 cri.go:89] found id: ""
	I0816 18:17:13.238975   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.238984   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:13.238990   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:13.239037   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:13.271485   75402 cri.go:89] found id: ""
	I0816 18:17:13.271518   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.271535   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:13.271544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:13.271612   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:13.307576   75402 cri.go:89] found id: ""
	I0816 18:17:13.307610   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.307622   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:13.307632   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:13.307698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:13.339746   75402 cri.go:89] found id: ""
	I0816 18:17:13.339792   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.339802   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:13.339813   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:13.339827   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:13.352847   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:13.352875   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:13.440397   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:13.440418   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:13.440432   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:13.514879   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:13.514916   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.553848   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:13.553882   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.103318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:16.115837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:16.115922   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:16.147079   75402 cri.go:89] found id: ""
	I0816 18:17:16.147108   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.147119   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:16.147127   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:16.147189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:16.184207   75402 cri.go:89] found id: ""
	I0816 18:17:16.184233   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.184241   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:16.184247   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:16.184295   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:16.219036   75402 cri.go:89] found id: ""
	I0816 18:17:16.219065   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.219072   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:16.219078   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:16.219163   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:16.251269   75402 cri.go:89] found id: ""
	I0816 18:17:16.251307   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.251320   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:16.251329   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:16.251394   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:16.286549   75402 cri.go:89] found id: ""
	I0816 18:17:16.286576   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.286585   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:16.286591   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:16.286647   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:16.322017   75402 cri.go:89] found id: ""
	I0816 18:17:16.322045   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.322055   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:16.322063   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:16.322128   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:16.353606   75402 cri.go:89] found id: ""
	I0816 18:17:16.353636   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.353646   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:16.353653   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:16.353719   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:16.386973   75402 cri.go:89] found id: ""
	I0816 18:17:16.387005   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.387016   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:16.387027   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:16.387039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.437031   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:16.437066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:16.451258   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:16.451292   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:16.519130   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:16.519155   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:16.519170   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:16.598591   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:16.598626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
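	(The cycle above is minikube repeatedly probing for the control-plane containers and collecting diagnostics while the apiserver is still down; the same cycle repeats below every few seconds. A minimal sketch of the equivalent manual checks, using only the commands the log itself runs and assuming a shell on the node, e.g. via `minikube ssh` for the affected profile:

	  # Is an apiserver process running at all?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # Any control-plane containers known to the CRI runtime? (repeat per component)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo crictl ps -a --quiet --name=etcd
	  # Recent kubelet and CRI-O logs
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  # Node view; fails with "connection refused" on localhost:8443 while the apiserver is down
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	)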
	I0816 18:17:19.147916   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:19.160525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:19.160600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:19.193494   75402 cri.go:89] found id: ""
	I0816 18:17:19.193520   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.193527   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:19.193533   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:19.193599   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:19.230936   75402 cri.go:89] found id: ""
	I0816 18:17:19.230963   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.230971   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:19.230976   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:19.231029   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:19.263713   75402 cri.go:89] found id: ""
	I0816 18:17:19.263735   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.263742   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:19.263748   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:19.263794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:19.294609   75402 cri.go:89] found id: ""
	I0816 18:17:19.294635   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.294642   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:19.294647   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:19.294698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:19.329278   75402 cri.go:89] found id: ""
	I0816 18:17:19.329303   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.329313   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:19.329319   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:19.329368   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:19.362007   75402 cri.go:89] found id: ""
	I0816 18:17:19.362043   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.362052   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:19.362067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:19.362120   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:19.395190   75402 cri.go:89] found id: ""
	I0816 18:17:19.395217   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.395248   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:19.395255   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:19.395302   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:19.426962   75402 cri.go:89] found id: ""
	I0816 18:17:19.426991   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.427002   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:19.427012   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:19.427027   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.441319   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:19.441346   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:19.511390   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:19.511409   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:19.511425   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:19.590897   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:19.590935   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:19.628753   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:19.628781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.182534   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:22.194844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:22.194917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:22.228225   75402 cri.go:89] found id: ""
	I0816 18:17:22.228247   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.228269   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:22.228276   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:22.228325   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:22.258614   75402 cri.go:89] found id: ""
	I0816 18:17:22.258646   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.258654   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:22.258660   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:22.258708   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:22.289103   75402 cri.go:89] found id: ""
	I0816 18:17:22.289136   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.289147   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:22.289154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:22.289215   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:22.321828   75402 cri.go:89] found id: ""
	I0816 18:17:22.321857   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.321869   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:22.321877   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:22.321942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:22.353557   75402 cri.go:89] found id: ""
	I0816 18:17:22.353588   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.353597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:22.353602   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:22.353660   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:22.385078   75402 cri.go:89] found id: ""
	I0816 18:17:22.385103   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.385110   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:22.385116   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:22.385189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:22.415864   75402 cri.go:89] found id: ""
	I0816 18:17:22.415900   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.415913   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:22.415922   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:22.415990   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:22.449895   75402 cri.go:89] found id: ""
	I0816 18:17:22.449922   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.449942   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:22.449957   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:22.449974   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:22.523055   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:22.523073   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:22.523084   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:22.599680   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:22.599719   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:22.638021   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:22.638057   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.688970   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:22.689010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.202748   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:25.217316   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:25.217388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:25.249528   75402 cri.go:89] found id: ""
	I0816 18:17:25.249558   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.249566   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:25.249578   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:25.249625   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:25.282667   75402 cri.go:89] found id: ""
	I0816 18:17:25.282696   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.282706   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:25.282712   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:25.282764   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:25.314061   75402 cri.go:89] found id: ""
	I0816 18:17:25.314091   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.314101   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:25.314108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:25.314161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:25.351260   75402 cri.go:89] found id: ""
	I0816 18:17:25.351287   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.351296   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:25.351301   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:25.351352   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:25.388303   75402 cri.go:89] found id: ""
	I0816 18:17:25.388334   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.388345   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:25.388352   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:25.388412   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:25.422133   75402 cri.go:89] found id: ""
	I0816 18:17:25.422161   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.422169   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:25.422175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:25.422232   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:25.456749   75402 cri.go:89] found id: ""
	I0816 18:17:25.456775   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.456783   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:25.456789   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:25.456836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:25.494783   75402 cri.go:89] found id: ""
	I0816 18:17:25.494809   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.494817   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:25.494825   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:25.494836   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:25.561253   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:25.561290   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.580349   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:25.580383   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:25.656333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:25.656361   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:25.656378   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:25.733479   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:25.733515   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:28.272217   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:28.285750   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:28.285822   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:28.318230   75402 cri.go:89] found id: ""
	I0816 18:17:28.318260   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.318268   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:28.318275   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:28.318344   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:28.351766   75402 cri.go:89] found id: ""
	I0816 18:17:28.351798   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.351808   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:28.351814   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:28.351872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:28.385543   75402 cri.go:89] found id: ""
	I0816 18:17:28.385572   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.385581   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:28.385588   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:28.385653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:28.418808   75402 cri.go:89] found id: ""
	I0816 18:17:28.418837   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.418846   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:28.418852   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:28.418900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:28.453883   75402 cri.go:89] found id: ""
	I0816 18:17:28.453911   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.453922   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:28.453929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:28.453996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:28.486261   75402 cri.go:89] found id: ""
	I0816 18:17:28.486291   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.486304   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:28.486310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:28.486366   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:28.520617   75402 cri.go:89] found id: ""
	I0816 18:17:28.520658   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.520670   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:28.520678   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:28.520731   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:28.552996   75402 cri.go:89] found id: ""
	I0816 18:17:28.553026   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.553036   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:28.553046   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:28.553061   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:28.604149   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:28.604192   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:28.617393   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:28.617421   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:28.683258   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:28.683279   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:28.683294   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.766933   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:28.766977   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.305897   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:31.326070   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:31.326143   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:31.375314   75402 cri.go:89] found id: ""
	I0816 18:17:31.375350   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.375361   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:31.375369   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:31.375429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:31.407372   75402 cri.go:89] found id: ""
	I0816 18:17:31.407398   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.407406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:31.407411   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:31.407459   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:31.445679   75402 cri.go:89] found id: ""
	I0816 18:17:31.445706   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.445714   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:31.445720   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:31.445781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:31.480040   75402 cri.go:89] found id: ""
	I0816 18:17:31.480072   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.480080   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:31.480085   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:31.480145   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:31.511143   75402 cri.go:89] found id: ""
	I0816 18:17:31.511171   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.511182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:31.511188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:31.511252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:31.544254   75402 cri.go:89] found id: ""
	I0816 18:17:31.544282   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.544293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:31.544300   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:31.544363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:31.579007   75402 cri.go:89] found id: ""
	I0816 18:17:31.579033   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.579041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:31.579046   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:31.579108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:31.619966   75402 cri.go:89] found id: ""
	I0816 18:17:31.619995   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.620005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:31.620018   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:31.620035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.657784   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:31.657815   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:31.706824   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:31.706853   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:31.719696   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:31.719721   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:31.786096   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:31.786124   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:31.786142   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.363862   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:34.377365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:34.377430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:34.414191   75402 cri.go:89] found id: ""
	I0816 18:17:34.414216   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.414223   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:34.414229   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:34.414285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:34.446811   75402 cri.go:89] found id: ""
	I0816 18:17:34.446836   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.446843   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:34.446848   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:34.446905   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:34.477582   75402 cri.go:89] found id: ""
	I0816 18:17:34.477615   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.477627   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:34.477634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:34.477695   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:34.507868   75402 cri.go:89] found id: ""
	I0816 18:17:34.507901   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.507912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:34.507921   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:34.507984   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:34.538719   75402 cri.go:89] found id: ""
	I0816 18:17:34.538754   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.538765   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:34.538772   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:34.538826   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:34.571445   75402 cri.go:89] found id: ""
	I0816 18:17:34.571468   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.571477   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:34.571484   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:34.571557   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:34.601587   75402 cri.go:89] found id: ""
	I0816 18:17:34.601611   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.601618   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:34.601624   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:34.601669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:34.634850   75402 cri.go:89] found id: ""
	I0816 18:17:34.634878   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.634892   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:34.634906   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:34.634920   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:34.682828   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:34.682859   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:34.695796   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:34.695820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:34.762100   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:34.762121   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:34.762133   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.845329   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:34.845359   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:37.386266   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:37.398940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:37.399005   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:37.433072   75402 cri.go:89] found id: ""
	I0816 18:17:37.433099   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.433112   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:37.433118   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:37.433169   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:37.466968   75402 cri.go:89] found id: ""
	I0816 18:17:37.467001   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.467012   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:37.467021   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:37.467086   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:37.509268   75402 cri.go:89] found id: ""
	I0816 18:17:37.509291   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.509300   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:37.509306   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:37.509365   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:37.541295   75402 cri.go:89] found id: ""
	I0816 18:17:37.541338   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.541350   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:37.541357   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:37.541421   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:37.575423   75402 cri.go:89] found id: ""
	I0816 18:17:37.575453   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.575464   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:37.575472   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:37.575540   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:37.614787   75402 cri.go:89] found id: ""
	I0816 18:17:37.614817   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.614828   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:37.614835   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:37.614896   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:37.646396   75402 cri.go:89] found id: ""
	I0816 18:17:37.646430   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.646441   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:37.646449   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:37.646517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:37.679383   75402 cri.go:89] found id: ""
	I0816 18:17:37.679414   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.679423   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:37.679431   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:37.679442   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:37.729641   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:37.729673   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:37.742420   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:37.742448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:37.812572   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:37.812600   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:37.812615   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:37.887100   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:37.887137   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:40.424202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:40.438231   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:40.438337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:40.474614   75402 cri.go:89] found id: ""
	I0816 18:17:40.474639   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.474648   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:40.474653   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:40.474701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:40.510123   75402 cri.go:89] found id: ""
	I0816 18:17:40.510154   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.510162   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:40.510167   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:40.510217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:40.548971   75402 cri.go:89] found id: ""
	I0816 18:17:40.549000   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.549008   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:40.549013   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:40.549069   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:40.595126   75402 cri.go:89] found id: ""
	I0816 18:17:40.595158   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.595167   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:40.595174   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:40.595220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:40.629769   75402 cri.go:89] found id: ""
	I0816 18:17:40.629793   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.629801   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:40.629807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:40.629871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:40.661889   75402 cri.go:89] found id: ""
	I0816 18:17:40.661922   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.661932   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:40.661939   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:40.662001   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:40.697764   75402 cri.go:89] found id: ""
	I0816 18:17:40.697790   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.697801   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:40.697808   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:40.697867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:40.734825   75402 cri.go:89] found id: ""
	I0816 18:17:40.734852   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.734862   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:40.734872   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:40.734939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:40.787975   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:40.788015   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:40.800817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:40.800843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:40.874182   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:40.874205   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:40.874219   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:40.960032   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:40.960066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:43.499770   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:43.513726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:43.513806   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:43.548368   75402 cri.go:89] found id: ""
	I0816 18:17:43.548396   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.548406   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:43.548413   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:43.548474   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:43.581177   75402 cri.go:89] found id: ""
	I0816 18:17:43.581205   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.581216   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:43.581223   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:43.581291   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:43.614315   75402 cri.go:89] found id: ""
	I0816 18:17:43.614354   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.614367   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:43.614374   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:43.614437   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:43.648608   75402 cri.go:89] found id: ""
	I0816 18:17:43.648645   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.648658   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:43.648669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:43.648722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:43.680549   75402 cri.go:89] found id: ""
	I0816 18:17:43.680586   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.680597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:43.680604   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:43.680686   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:43.710473   75402 cri.go:89] found id: ""
	I0816 18:17:43.710497   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.710506   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:43.710514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:43.710576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:43.741415   75402 cri.go:89] found id: ""
	I0816 18:17:43.741442   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.741450   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:43.741456   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:43.741505   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:43.775018   75402 cri.go:89] found id: ""
	I0816 18:17:43.775051   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.775063   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:43.775074   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:43.775087   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:43.825596   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:43.825630   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:43.839133   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:43.839161   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:43.905645   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:43.905667   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:43.905679   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:43.988860   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:43.988901   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.525896   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:46.539147   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:46.539229   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:46.570703   75402 cri.go:89] found id: ""
	I0816 18:17:46.570726   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.570734   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:46.570740   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:46.570785   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:46.605909   75402 cri.go:89] found id: ""
	I0816 18:17:46.605939   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.605954   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:46.605961   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:46.606013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:46.638865   75402 cri.go:89] found id: ""
	I0816 18:17:46.638899   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.638911   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:46.638919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:46.638994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:46.671869   75402 cri.go:89] found id: ""
	I0816 18:17:46.671904   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.671917   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:46.671926   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:46.671988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:46.703423   75402 cri.go:89] found id: ""
	I0816 18:17:46.703464   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.703473   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:46.703479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:46.703545   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:46.735824   75402 cri.go:89] found id: ""
	I0816 18:17:46.735853   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.735864   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:46.735871   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:46.735926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:46.767122   75402 cri.go:89] found id: ""
	I0816 18:17:46.767146   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.767154   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:46.767160   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:46.767207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:46.798093   75402 cri.go:89] found id: ""
	I0816 18:17:46.798126   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.798140   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:46.798152   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:46.798167   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.832699   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:46.832725   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:46.884212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:46.884246   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:46.896896   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:46.896921   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:46.968805   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:46.968824   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:46.968838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:49.552581   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:49.565134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:49.565212   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:49.597012   75402 cri.go:89] found id: ""
	I0816 18:17:49.597042   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.597057   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:49.597067   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:49.597133   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:49.628902   75402 cri.go:89] found id: ""
	I0816 18:17:49.628935   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.628948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:49.628957   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:49.629025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:49.662668   75402 cri.go:89] found id: ""
	I0816 18:17:49.662698   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.662709   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:49.662715   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:49.662778   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:49.696354   75402 cri.go:89] found id: ""
	I0816 18:17:49.696381   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.696389   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:49.696395   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:49.696487   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:49.730801   75402 cri.go:89] found id: ""
	I0816 18:17:49.730838   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.730849   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:49.730856   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:49.730921   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:49.764474   75402 cri.go:89] found id: ""
	I0816 18:17:49.764503   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.764514   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:49.764522   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:49.764585   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:49.798577   75402 cri.go:89] found id: ""
	I0816 18:17:49.798616   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.798627   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:49.798634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:49.798703   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:49.830987   75402 cri.go:89] found id: ""
	I0816 18:17:49.831016   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.831024   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:49.831032   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:49.831043   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:49.883397   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:49.883433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:49.897208   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:49.897239   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:49.968363   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:49.968386   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:49.968398   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:50.056552   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:50.056583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
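The block above is one full iteration of the wait loop that repeats for the rest of this log: probe for a running kube-apiserver process with pgrep, list CRI containers by name with crictl, find none, and fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. The sketch below is only an illustration of that polling pattern under assumed timeouts and intervals; it runs the commands locally rather than over SSH and is not minikube's actual implementation.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>" from the log:
	// it returns the IDs of all containers (any state) whose name matches.
	func containerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout for this sketch
		for time.Now().Before(deadline) {
			// Equivalent of "sudo pgrep -xnf kube-apiserver.*minikube.*": is the process up?
			procUp := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	
			// The log lists containers whether or not the process was found.
			ids := containerIDs("kube-apiserver")
			if procUp && len(ids) > 0 {
				fmt.Println("kube-apiserver is up:", ids[0])
				return
			}
			// Nothing yet (the `found id: ""` case above): gather diagnostics, then retry.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}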
	I0816 18:17:52.596191   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:52.609592   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:52.609668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:52.645775   75402 cri.go:89] found id: ""
	I0816 18:17:52.645807   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.645817   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:52.645823   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:52.645869   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:52.677817   75402 cri.go:89] found id: ""
	I0816 18:17:52.677852   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.677862   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:52.677870   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:52.677935   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:52.710618   75402 cri.go:89] found id: ""
	I0816 18:17:52.710648   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.710658   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:52.710664   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:52.710716   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:52.745830   75402 cri.go:89] found id: ""
	I0816 18:17:52.745858   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.745867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:52.745872   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:52.745929   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:52.778511   75402 cri.go:89] found id: ""
	I0816 18:17:52.778538   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.778548   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:52.778567   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:52.778632   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:52.810759   75402 cri.go:89] found id: ""
	I0816 18:17:52.810788   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.810800   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:52.810807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:52.810872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:52.843786   75402 cri.go:89] found id: ""
	I0816 18:17:52.843814   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.843824   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:52.843831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:52.843886   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:52.876886   75402 cri.go:89] found id: ""
	I0816 18:17:52.876914   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.876924   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:52.876934   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:52.876950   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:52.932519   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:52.932559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:52.946645   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:52.946671   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:53.018156   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:53.018177   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:53.018190   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:53.095562   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:53.095600   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
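Every "describe nodes" attempt fails the same way: kubectl inside the guest cannot reach the API server on localhost:8443, which is consistent with crictl reporting no kube-apiserver container at all. A quick way to distinguish "nothing is listening" from "listening but unhealthy" is to dial the port directly; the sketch below is an illustrative helper, not part of minikube, and the address and timeout are assumptions.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "127.0.0.1:8443" // assumed: minikube's default apiserver port inside the guest
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the kubectl error in the log:
			// no process is accepting connections on the port.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on", addr, "- the failure is higher up the stack")
	}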
	I0816 18:17:55.633820   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:55.646170   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:55.646238   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:55.678147   75402 cri.go:89] found id: ""
	I0816 18:17:55.678181   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.678194   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:55.678202   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:55.678264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:55.710910   75402 cri.go:89] found id: ""
	I0816 18:17:55.710938   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.710948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:55.710956   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:55.711012   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:55.744822   75402 cri.go:89] found id: ""
	I0816 18:17:55.744853   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.744863   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:55.744870   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:55.744931   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:55.791677   75402 cri.go:89] found id: ""
	I0816 18:17:55.791708   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.791719   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:55.791727   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:55.791788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:55.826448   75402 cri.go:89] found id: ""
	I0816 18:17:55.826481   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.826492   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:55.826500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:55.826564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:55.861178   75402 cri.go:89] found id: ""
	I0816 18:17:55.861210   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.861219   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:55.861225   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:55.861280   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:55.898073   75402 cri.go:89] found id: ""
	I0816 18:17:55.898099   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.898110   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:55.898117   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:55.898184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:55.931446   75402 cri.go:89] found id: ""
	I0816 18:17:55.931478   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.931487   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:55.931498   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:55.931514   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:55.999910   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:55.999931   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:55.999943   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:56.077240   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:56.077312   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:56.115479   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:56.115506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:56.166954   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:56.166989   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:58.680571   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:58.692824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:58.692890   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:58.729761   75402 cri.go:89] found id: ""
	I0816 18:17:58.729786   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.729794   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:58.729799   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:58.729857   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:58.764943   75402 cri.go:89] found id: ""
	I0816 18:17:58.765082   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.765113   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:58.765124   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:58.765179   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:58.801314   75402 cri.go:89] found id: ""
	I0816 18:17:58.801345   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.801357   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:58.801365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:58.801429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:58.833936   75402 cri.go:89] found id: ""
	I0816 18:17:58.833973   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.833982   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:58.833988   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:58.834046   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:58.870108   75402 cri.go:89] found id: ""
	I0816 18:17:58.870137   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.870148   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:58.870155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:58.870219   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:58.904157   75402 cri.go:89] found id: ""
	I0816 18:17:58.904184   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.904194   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:58.904201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:58.904264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:58.937862   75402 cri.go:89] found id: ""
	I0816 18:17:58.937891   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.937901   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:58.937909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:58.937972   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:58.972465   75402 cri.go:89] found id: ""
	I0816 18:17:58.972495   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.972506   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:58.972517   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:58.972532   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:59.047197   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:59.047223   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:59.047238   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:59.126634   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:59.126668   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:59.165528   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:59.165562   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:59.214294   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:59.214433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
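Note that every listing uses crictl ps -a, which includes exited containers, so the empty results mean the control-plane containers were never created at all rather than created and then crashed. One way to make that distinction explicit is to compare the running-only and all-states listings, as in the hedged sketch below (the container name is just the example taken from this log).
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func ids(args ...string) []string {
		out, err := exec.Command("sudo", append([]string{"crictl", "ps"}, args...)...).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}
	
	func main() {
		name := "kube-apiserver"                      // example name from the log
		running := ids("--quiet", "--name="+name)     // running containers only
		all := ids("-a", "--quiet", "--name="+name)   // any state, as the log's command does
		switch {
		case len(running) > 0:
			fmt.Println(name, "is running:", running[0])
		case len(all) > 0:
			fmt.Println(name, "was created but is not running:", all[0])
		default:
			fmt.Println(name, "was never created") // the case seen throughout this log
		}
	}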
	I0816 18:18:01.729662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:01.742582   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:01.742642   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:01.776148   75402 cri.go:89] found id: ""
	I0816 18:18:01.776180   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.776188   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:01.776197   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:01.776243   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:01.809186   75402 cri.go:89] found id: ""
	I0816 18:18:01.809218   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.809229   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:01.809237   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:01.809307   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:01.842379   75402 cri.go:89] found id: ""
	I0816 18:18:01.842406   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.842417   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:01.842425   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:01.842490   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:01.874706   75402 cri.go:89] found id: ""
	I0816 18:18:01.874739   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.874747   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:01.874753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:01.874813   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:01.915567   75402 cri.go:89] found id: ""
	I0816 18:18:01.915596   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.915607   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:01.915615   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:01.915675   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:01.951527   75402 cri.go:89] found id: ""
	I0816 18:18:01.951559   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.951569   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:01.951576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:01.951638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:01.983822   75402 cri.go:89] found id: ""
	I0816 18:18:01.983848   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.983856   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:01.983861   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:01.983909   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:02.018976   75402 cri.go:89] found id: ""
	I0816 18:18:02.019003   75402 logs.go:276] 0 containers: []
	W0816 18:18:02.019012   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:02.019019   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:02.019033   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:02.071096   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:02.071131   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:02.085163   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:02.085189   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:02.154771   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:02.154789   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:02.154800   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:02.242068   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:02.242105   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:04.790311   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:04.803215   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:04.803298   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:04.835834   75402 cri.go:89] found id: ""
	I0816 18:18:04.835868   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.835879   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:04.835886   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:04.835951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:04.870000   75402 cri.go:89] found id: ""
	I0816 18:18:04.870032   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.870042   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:04.870049   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:04.870111   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:04.906624   75402 cri.go:89] found id: ""
	I0816 18:18:04.906653   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.906663   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:04.906670   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:04.906730   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:04.940115   75402 cri.go:89] found id: ""
	I0816 18:18:04.940139   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.940148   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:04.940155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:04.940213   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:04.974461   75402 cri.go:89] found id: ""
	I0816 18:18:04.974493   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.974503   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:04.974510   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:04.974571   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:05.006593   75402 cri.go:89] found id: ""
	I0816 18:18:05.006618   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.006628   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:05.006635   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:05.006691   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:05.040041   75402 cri.go:89] found id: ""
	I0816 18:18:05.040066   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.040082   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:05.040089   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:05.040144   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:05.072968   75402 cri.go:89] found id: ""
	I0816 18:18:05.072996   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.073005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:05.073014   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:05.073025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:05.124510   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:05.124543   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:05.145566   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:05.145592   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:05.221874   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:05.221898   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:05.221914   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:05.297283   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:05.297316   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:07.837564   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:07.850372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:07.850441   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:07.882879   75402 cri.go:89] found id: ""
	I0816 18:18:07.882906   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.882915   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:07.882920   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:07.882978   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:07.916983   75402 cri.go:89] found id: ""
	I0816 18:18:07.917011   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.917019   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:07.917024   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:07.917075   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:07.953864   75402 cri.go:89] found id: ""
	I0816 18:18:07.953886   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.953896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:07.953903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:07.953951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:07.994375   75402 cri.go:89] found id: ""
	I0816 18:18:07.994399   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.994408   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:07.994414   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:07.994472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:08.029137   75402 cri.go:89] found id: ""
	I0816 18:18:08.029170   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.029182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:08.029189   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:08.029253   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:08.062331   75402 cri.go:89] found id: ""
	I0816 18:18:08.062358   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.062367   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:08.062373   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:08.062430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:08.097021   75402 cri.go:89] found id: ""
	I0816 18:18:08.097044   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.097051   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:08.097056   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:08.097112   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:08.131147   75402 cri.go:89] found id: ""
	I0816 18:18:08.131174   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.131184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:08.131192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:08.131203   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.182334   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:08.182373   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:08.195459   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:08.195485   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:08.260333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:08.260351   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:08.260363   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:08.344466   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:08.344506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
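Because the apiserver never comes up, the only collectors that return data in these cycles are the kubelet journal, dmesg, the CRI-O journal, and the raw container status; "describe nodes" will keep failing until the apiserver is reachable. When reading a report like this, a quick pass that pulls only error-looking lines out of the kubelet unit is often enough to locate the root cause. The sketch below is one hedged way to do that locally; the unit name and line count are copied from the commands in the log, and the keyword filter is an assumption.
	package main
	
	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same source the log gathers: "journalctl -u kubelet -n 400".
		out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			line := sc.Text()
			// Keep only lines that look like errors or failures.
			if strings.Contains(line, "error") || strings.Contains(line, "Failed") {
				fmt.Println(line)
			}
		}
	}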
	I0816 18:18:10.881640   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:10.896400   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:10.896482   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:10.934034   75402 cri.go:89] found id: ""
	I0816 18:18:10.934068   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.934076   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:10.934081   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:10.934130   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:10.966697   75402 cri.go:89] found id: ""
	I0816 18:18:10.966724   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.966733   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:10.966741   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:10.966807   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:11.000540   75402 cri.go:89] found id: ""
	I0816 18:18:11.000568   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.000579   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:11.000587   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:11.000665   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:11.034322   75402 cri.go:89] found id: ""
	I0816 18:18:11.034346   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.034354   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:11.034360   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:11.034407   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:11.067081   75402 cri.go:89] found id: ""
	I0816 18:18:11.067108   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.067116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:11.067122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:11.067170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:11.099726   75402 cri.go:89] found id: ""
	I0816 18:18:11.099753   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.099763   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:11.099770   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:11.099834   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:11.133187   75402 cri.go:89] found id: ""
	I0816 18:18:11.133216   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.133226   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:11.133235   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:11.133315   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:11.167121   75402 cri.go:89] found id: ""
	I0816 18:18:11.167157   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.167166   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:11.167177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:11.167194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:11.181396   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:11.181424   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:11.248286   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:11.248313   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:11.248325   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:11.328546   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:11.328583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:11.365534   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:11.365576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:13.919889   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:13.935097   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:13.935178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:13.973196   75402 cri.go:89] found id: ""
	I0816 18:18:13.973225   75402 logs.go:276] 0 containers: []
	W0816 18:18:13.973236   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:13.973244   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:13.973328   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:14.011913   75402 cri.go:89] found id: ""
	I0816 18:18:14.011936   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.011944   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:14.011950   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:14.012013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:14.048418   75402 cri.go:89] found id: ""
	I0816 18:18:14.048447   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.048459   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:14.048466   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:14.048515   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:14.082462   75402 cri.go:89] found id: ""
	I0816 18:18:14.082496   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.082506   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:14.082514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:14.082576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:14.114958   75402 cri.go:89] found id: ""
	I0816 18:18:14.114986   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.114996   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:14.115005   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:14.115067   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:14.154829   75402 cri.go:89] found id: ""
	I0816 18:18:14.154865   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.154878   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:14.154888   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:14.154957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:14.190012   75402 cri.go:89] found id: ""
	I0816 18:18:14.190045   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.190053   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:14.190058   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:14.190108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:14.223314   75402 cri.go:89] found id: ""
	I0816 18:18:14.223341   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.223350   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:14.223360   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:14.223381   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:14.274995   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:14.275035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:14.288518   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:14.288564   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:14.365668   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:14.365691   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:14.365705   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:14.445828   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:14.445866   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:16.981802   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:16.994729   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:16.994794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:17.029790   75402 cri.go:89] found id: ""
	I0816 18:18:17.029821   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.029839   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:17.029848   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:17.029912   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:17.063194   75402 cri.go:89] found id: ""
	I0816 18:18:17.063223   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.063233   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:17.063240   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:17.063293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:17.097808   75402 cri.go:89] found id: ""
	I0816 18:18:17.097831   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.097839   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:17.097844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:17.097900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:17.132646   75402 cri.go:89] found id: ""
	I0816 18:18:17.132682   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.132691   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:17.132697   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:17.132751   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:17.164285   75402 cri.go:89] found id: ""
	I0816 18:18:17.164316   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.164328   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:17.164335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:17.164391   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:17.195642   75402 cri.go:89] found id: ""
	I0816 18:18:17.195672   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.195683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:17.195691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:17.195754   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:17.228005   75402 cri.go:89] found id: ""
	I0816 18:18:17.228033   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.228041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:17.228047   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:17.228107   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:17.279195   75402 cri.go:89] found id: ""
	I0816 18:18:17.279229   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.279241   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:17.279253   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:17.279270   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:17.360084   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:17.360125   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:17.405184   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:17.405210   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:17.457453   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:17.457483   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:17.471472   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:17.471502   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:17.536478   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.036644   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:20.050169   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:20.050244   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.087943   75402 cri.go:89] found id: ""
	I0816 18:18:20.087971   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.087981   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:20.087988   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:20.088051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:20.119908   75402 cri.go:89] found id: ""
	I0816 18:18:20.119931   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.119940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:20.119945   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:20.120013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:20.152115   75402 cri.go:89] found id: ""
	I0816 18:18:20.152146   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.152156   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:20.152162   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:20.152209   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:20.189464   75402 cri.go:89] found id: ""
	I0816 18:18:20.189488   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.189495   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:20.189500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:20.189550   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:20.224779   75402 cri.go:89] found id: ""
	I0816 18:18:20.224807   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.224817   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:20.224824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:20.224888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:20.257021   75402 cri.go:89] found id: ""
	I0816 18:18:20.257048   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.257059   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:20.257067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:20.257121   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:20.290991   75402 cri.go:89] found id: ""
	I0816 18:18:20.291023   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.291032   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:20.291039   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:20.291099   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:20.323674   75402 cri.go:89] found id: ""
	I0816 18:18:20.323704   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.323715   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:20.323726   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:20.323742   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:20.373411   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:20.373447   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:20.386954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:20.386981   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:20.464366   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.464384   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:20.464403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:20.541836   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:20.541881   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
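The container-status collector uses a small shell fallback: `which crictl || echo crictl` keeps the command usable even when crictl is not on the non-interactive PATH, and `|| sudo docker ps -a` covers Docker-based runtimes. The Go sketch below mirrors that try-one-then-the-other idea for illustration only; it is not minikube code.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Prefer crictl (the CRI-O path used throughout this log),
		// fall back to docker if crictl is missing or fails.
		for _, cmd := range [][]string{
			{"sudo", "crictl", "ps", "-a"},
			{"sudo", "docker", "ps", "-a"},
		} {
			out, err := exec.Command(cmd[0], cmd[1:]...).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			fmt.Printf("%v failed: %v\n", cmd, err)
		}
	}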
	I0816 18:18:23.085071   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:23.100460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:23.100524   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:23.141239   75402 cri.go:89] found id: ""
	I0816 18:18:23.141269   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.141280   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:23.141287   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:23.141354   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:23.172914   75402 cri.go:89] found id: ""
	I0816 18:18:23.172941   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.172950   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:23.172958   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:23.173015   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:23.205593   75402 cri.go:89] found id: ""
	I0816 18:18:23.205621   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.205632   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:23.205640   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:23.205706   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:23.239358   75402 cri.go:89] found id: ""
	I0816 18:18:23.239383   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.239392   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:23.239401   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:23.239463   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:23.271798   75402 cri.go:89] found id: ""
	I0816 18:18:23.271828   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.271838   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:23.271844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:23.271911   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:23.305287   75402 cri.go:89] found id: ""
	I0816 18:18:23.305316   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.305327   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:23.305335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:23.305397   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:23.344041   75402 cri.go:89] found id: ""
	I0816 18:18:23.344067   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.344075   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:23.344080   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:23.344134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:23.376540   75402 cri.go:89] found id: ""
	I0816 18:18:23.376571   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.376583   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:23.376601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:23.376616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:23.428265   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:23.428301   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:23.441377   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:23.441404   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:23.509219   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:23.509243   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:23.509259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:23.589151   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:23.589186   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.126176   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:26.140228   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:26.140292   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:26.176768   75402 cri.go:89] found id: ""
	I0816 18:18:26.176807   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.176820   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:26.176829   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:26.176887   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:26.212357   75402 cri.go:89] found id: ""
	I0816 18:18:26.212383   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.212390   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:26.212396   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:26.212457   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:26.245256   75402 cri.go:89] found id: ""
	I0816 18:18:26.245290   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.245302   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:26.245309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:26.245370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:26.277525   75402 cri.go:89] found id: ""
	I0816 18:18:26.277561   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.277569   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:26.277575   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:26.277627   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:26.310928   75402 cri.go:89] found id: ""
	I0816 18:18:26.310956   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.310967   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:26.310976   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:26.311052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:26.344595   75402 cri.go:89] found id: ""
	I0816 18:18:26.344647   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.344661   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:26.344669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:26.344741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:26.377776   75402 cri.go:89] found id: ""
	I0816 18:18:26.377805   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.377814   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:26.377820   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:26.377872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:26.411139   75402 cri.go:89] found id: ""
	I0816 18:18:26.411167   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.411179   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:26.411190   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:26.411204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:26.493802   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:26.493838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.529542   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:26.529576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:26.583544   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:26.583588   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:26.596429   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:26.596459   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:26.667858   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.168766   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:29.182032   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:29.182103   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:29.220213   75402 cri.go:89] found id: ""
	I0816 18:18:29.220239   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.220247   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:29.220253   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:29.220300   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:29.257820   75402 cri.go:89] found id: ""
	I0816 18:18:29.257850   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.257861   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:29.257867   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:29.257933   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:29.290450   75402 cri.go:89] found id: ""
	I0816 18:18:29.290473   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.290480   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:29.290485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:29.290546   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:29.328032   75402 cri.go:89] found id: ""
	I0816 18:18:29.328061   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.328070   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:29.328076   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:29.328135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:29.362104   75402 cri.go:89] found id: ""
	I0816 18:18:29.362132   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.362141   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:29.362149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:29.362218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:29.395258   75402 cri.go:89] found id: ""
	I0816 18:18:29.395290   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.395301   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:29.395309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:29.395375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:29.426617   75402 cri.go:89] found id: ""
	I0816 18:18:29.426646   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.426656   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:29.426663   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:29.426725   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:29.462861   75402 cri.go:89] found id: ""
	I0816 18:18:29.462890   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.462901   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:29.462912   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:29.462928   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:29.514882   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:29.514915   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:29.528101   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:29.528128   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:29.598983   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.599005   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:29.599020   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:29.684955   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:29.684991   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.230155   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:32.244158   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:32.244226   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:32.281993   75402 cri.go:89] found id: ""
	I0816 18:18:32.282020   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.282031   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:32.282037   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:32.282100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:32.316870   75402 cri.go:89] found id: ""
	I0816 18:18:32.316896   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.316906   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:32.316914   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:32.316976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:32.352597   75402 cri.go:89] found id: ""
	I0816 18:18:32.352637   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.352649   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:32.352656   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:32.352722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:32.387520   75402 cri.go:89] found id: ""
	I0816 18:18:32.387564   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.387576   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:32.387584   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:32.387638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:32.421499   75402 cri.go:89] found id: ""
	I0816 18:18:32.421526   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.421537   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:32.421544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:32.421603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:32.460048   75402 cri.go:89] found id: ""
	I0816 18:18:32.460075   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.460086   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:32.460093   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:32.460151   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:32.498148   75402 cri.go:89] found id: ""
	I0816 18:18:32.498176   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.498184   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:32.498190   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:32.498248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:32.530683   75402 cri.go:89] found id: ""
	I0816 18:18:32.530717   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.530730   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:32.530741   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:32.530762   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:32.614776   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:32.614820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.655628   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:32.655667   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:32.722763   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:32.722807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:32.739817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:32.739847   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:32.819297   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:35.320173   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:35.332427   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:35.332503   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:35.366316   75402 cri.go:89] found id: ""
	I0816 18:18:35.366346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.366357   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:35.366365   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:35.366433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:35.399308   75402 cri.go:89] found id: ""
	I0816 18:18:35.399346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.399357   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:35.399367   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:35.399434   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:35.434926   75402 cri.go:89] found id: ""
	I0816 18:18:35.434958   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.434971   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:35.434980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:35.435042   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:35.473222   75402 cri.go:89] found id: ""
	I0816 18:18:35.473247   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.473258   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:35.473266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:35.473343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:35.505484   75402 cri.go:89] found id: ""
	I0816 18:18:35.505521   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.505533   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:35.505540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:35.505608   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:35.540532   75402 cri.go:89] found id: ""
	I0816 18:18:35.540573   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.540584   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:35.540590   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:35.540663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:35.574205   75402 cri.go:89] found id: ""
	I0816 18:18:35.574235   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.574245   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:35.574252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:35.574343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:35.614707   75402 cri.go:89] found id: ""
	I0816 18:18:35.614732   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.614739   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:35.614747   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:35.614759   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:35.690830   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:35.690861   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:35.726601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:35.726627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:35.774706   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:35.774736   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:35.787557   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:35.787616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:35.857474   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:38.358057   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:38.371128   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:38.371189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:38.404812   75402 cri.go:89] found id: ""
	I0816 18:18:38.404844   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.404855   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:38.404864   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:38.404926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:38.437922   75402 cri.go:89] found id: ""
	I0816 18:18:38.437950   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.437960   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:38.437967   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:38.438023   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:38.471474   75402 cri.go:89] found id: ""
	I0816 18:18:38.471509   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.471519   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:38.471525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:38.471582   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:38.510132   75402 cri.go:89] found id: ""
	I0816 18:18:38.510158   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.510168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:38.510184   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:38.510246   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:38.542212   75402 cri.go:89] found id: ""
	I0816 18:18:38.542251   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.542262   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:38.542269   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:38.542341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:38.579037   75402 cri.go:89] found id: ""
	I0816 18:18:38.579068   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.579076   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:38.579082   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:38.579129   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:38.619219   75402 cri.go:89] found id: ""
	I0816 18:18:38.619252   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.619263   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:38.619272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:38.619335   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:38.655124   75402 cri.go:89] found id: ""
	I0816 18:18:38.655149   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.655169   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:38.655180   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:38.655194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:38.737857   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:38.737894   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:38.779777   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:38.779806   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:38.831556   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:38.831590   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:38.844496   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:38.844523   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:38.914543   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.415612   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:41.428187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:41.428251   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:41.462932   75402 cri.go:89] found id: ""
	I0816 18:18:41.462964   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.462975   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:41.462983   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:41.463043   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:41.497712   75402 cri.go:89] found id: ""
	I0816 18:18:41.497739   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.497748   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:41.497754   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:41.497804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:41.528430   75402 cri.go:89] found id: ""
	I0816 18:18:41.528455   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.528463   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:41.528468   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:41.528527   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:41.560048   75402 cri.go:89] found id: ""
	I0816 18:18:41.560071   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.560081   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:41.560088   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:41.560142   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:41.592536   75402 cri.go:89] found id: ""
	I0816 18:18:41.592566   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.592577   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:41.592585   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:41.592663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:41.626850   75402 cri.go:89] found id: ""
	I0816 18:18:41.626884   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.626894   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:41.626902   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:41.626965   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:41.660452   75402 cri.go:89] found id: ""
	I0816 18:18:41.660478   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.660486   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:41.660491   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:41.660542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:41.695990   75402 cri.go:89] found id: ""
	I0816 18:18:41.696012   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.696020   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:41.696028   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:41.696039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:41.733107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:41.733134   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:41.782812   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:41.782843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:41.795954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:41.795984   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:41.867473   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.867526   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:41.867545   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.450340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:44.463299   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:44.463361   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:44.495068   75402 cri.go:89] found id: ""
	I0816 18:18:44.495098   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.495108   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:44.495116   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:44.495221   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:44.529615   75402 cri.go:89] found id: ""
	I0816 18:18:44.529638   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.529646   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:44.529651   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:44.529701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:44.565275   75402 cri.go:89] found id: ""
	I0816 18:18:44.565298   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.565306   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:44.565321   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:44.565384   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:44.598554   75402 cri.go:89] found id: ""
	I0816 18:18:44.598590   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.598601   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:44.598609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:44.598673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:44.631389   75402 cri.go:89] found id: ""
	I0816 18:18:44.631422   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.631436   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:44.631446   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:44.631519   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:44.663986   75402 cri.go:89] found id: ""
	I0816 18:18:44.664013   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.664023   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:44.664031   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:44.664095   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:44.700238   75402 cri.go:89] found id: ""
	I0816 18:18:44.700263   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.700272   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:44.700277   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:44.700330   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:44.732737   75402 cri.go:89] found id: ""
	I0816 18:18:44.732766   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.732779   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:44.732790   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:44.732807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.806427   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:44.806462   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:44.842965   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:44.842994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:44.895745   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:44.895781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:44.909850   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:44.909885   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:44.979315   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:47.479563   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:47.491876   75402 kubeadm.go:597] duration metric: took 4m4.431091965s to restartPrimaryControlPlane
	W0816 18:18:47.491939   75402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
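
The four-minute loop above repeats one probe: pgrep for a kube-apiserver process, ask CRI-O (via crictl) for a container of each control-plane component, and, when nothing is found, gather kubelet, dmesg, CRI-O and container-status logs before trying again roughly every three seconds. A minimal stand-alone sketch of that probe follows; it is not minikube source and assumes crictl is installed and reachable via sudo:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverContainers mirrors the logged command:
//   sudo crictl ps -a --quiet --name=kube-apiserver
func apiserverContainers() []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	// The restart attempt above ran for about four minutes before giving up.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ids := apiserverContainers(); len(ids) > 0 {
			fmt.Println("kube-apiserver container(s) found:", ids)
			return
		}
		fmt.Println("no kube-apiserver container yet; retrying")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver; the cluster will be reset")
}
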
	I0816 18:18:47.491962   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:51.168302   75402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676317513s)
	I0816 18:18:51.168387   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:18:51.182492   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:18:51.192403   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:18:51.202058   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:18:51.202075   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:18:51.202115   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:18:51.210661   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:18:51.210721   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:18:51.219979   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:18:51.228422   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:18:51.228488   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:18:51.237159   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.245555   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:18:51.245622   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.253986   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:18:51.261885   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:18:51.261927   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
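
The grep/rm pairs above are minikube's stale-config cleanup: any of the four kubeconfigs under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm regenerates it (here all four files were missing, so grep exits with status 2 and the rm -f is a no-op). A hedged sketch of the same check; the file list and endpoint come from the log, while the helper name is hypothetical:

package main

import (
	"fmt"
	"os"
	"strings"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

// cleanStaleKubeconfigs drops any kubeconfig that is missing or does not point
// at the expected control-plane endpoint, mirroring the grep / rm -f pairs above.
func cleanStaleKubeconfigs() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), controlPlaneURL) {
			fmt.Printf("%s may not reference %s - removing\n", f, controlPlaneURL)
			_ = os.Remove(f) // ignore errors, like rm -f
		}
	}
}

func main() {
	cleanStaleKubeconfigs()
}
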
	I0816 18:18:51.270479   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:18:51.335784   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:18:51.335883   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:18:51.482910   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:18:51.483069   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:18:51.483228   75402 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:18:51.652730   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:18:51.655077   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:18:51.655185   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:18:51.655304   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:18:51.655425   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:18:51.655521   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:18:51.657408   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:18:51.657485   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:18:51.657561   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:18:51.657645   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:18:51.657748   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:18:51.657854   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:18:51.657911   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:18:51.657984   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:18:51.720786   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:18:51.991165   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:18:52.140983   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:18:52.453361   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:18:52.467210   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:18:52.469222   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:18:52.469338   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:18:52.590938   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:18:52.592875   75402 out.go:235]   - Booting up control plane ...
	I0816 18:18:52.592987   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:18:52.602597   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:18:52.603616   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:18:52.604417   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:18:52.606669   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:19:32.607933   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:19:32.608136   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:32.608430   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:37.609143   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:37.609401   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:47.609941   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:47.610185   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:07.611108   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:07.611350   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613446   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:47.613708   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613742   75402 kubeadm.go:310] 
	I0816 18:20:47.613809   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:20:47.613902   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:20:47.613926   75402 kubeadm.go:310] 
	I0816 18:20:47.613976   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:20:47.614028   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:20:47.614160   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:20:47.614174   75402 kubeadm.go:310] 
	I0816 18:20:47.614323   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:20:47.614383   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:20:47.614432   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:20:47.614441   75402 kubeadm.go:310] 
	I0816 18:20:47.614601   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:20:47.614730   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0816 18:20:47.614751   75402 kubeadm.go:310] 
	I0816 18:20:47.614875   75402 kubeadm.go:310] 		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:20:47.614982   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:20:47.615101   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:20:47.615217   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:20:47.615230   75402 kubeadm.go:310] 
	I0816 18:20:47.616865   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:20:47.616971   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:20:47.617028   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
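
Every kubelet-check failure above is the same probe kubeadm describes: an HTTP GET against the kubelet's healthz endpoint on 127.0.0.1:10248, refused here because the kubelet never came up. Below is a small stand-alone version of that probe, a sketch rather than kubeadm's actual implementation; the port and the roughly four-minute budget are taken from the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// kubeletHealthy performs the probe from the log:
//   curl -sSL http://localhost:10248/healthz
func kubeletHealthy() bool {
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:10248: connect: connection refused", as above
		fmt.Println("healthz probe failed:", err)
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// kubeadm waits an initial 40s and then retries with growing pauses;
	// this sketch simply polls every 5s until the same ~4m deadline.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if kubeletHealthy() {
			fmt.Println("kubelet is healthy")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}
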
	W0816 18:20:47.617173   75402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 18:20:47.617226   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:20:48.158066   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:20:48.172568   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:20:48.182445   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:20:48.182468   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:20:48.182527   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:20:48.191779   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:20:48.191847   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:20:48.201531   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:20:48.210495   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:20:48.210568   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:20:48.219701   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.228170   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:20:48.228242   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.237366   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:20:48.246335   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:20:48.246393   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:20:48.255655   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:20:48.321873   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:20:48.321930   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:20:48.462199   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:20:48.462324   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:20:48.462448   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:20:48.646565   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:20:48.648485   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:20:48.648605   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:20:48.648748   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:20:48.648895   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:20:48.648994   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:20:48.649088   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:20:48.649185   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:20:48.649282   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:20:48.649368   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:20:48.649485   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:20:48.649595   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:20:48.649649   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:20:48.649753   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:20:48.864525   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:20:49.035729   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:20:49.086765   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:20:49.222612   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:20:49.239121   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:20:49.240158   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:20:49.240200   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:20:49.366027   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:20:49.367770   75402 out.go:235]   - Booting up control plane ...
	I0816 18:20:49.367907   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:20:49.373047   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:20:49.373886   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:20:49.374691   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:20:49.379220   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:21:29.381362   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:21:29.381473   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:29.381700   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:34.381889   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:34.382065   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:44.382765   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:44.382964   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:04.383485   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:04.383748   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382265   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:44.382558   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382572   75402 kubeadm.go:310] 
	I0816 18:22:44.382628   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:22:44.382715   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:22:44.382741   75402 kubeadm.go:310] 
	I0816 18:22:44.382789   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:22:44.382837   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:22:44.382986   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:22:44.382997   75402 kubeadm.go:310] 
	I0816 18:22:44.383149   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:22:44.383202   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:22:44.383246   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:22:44.383258   75402 kubeadm.go:310] 
	I0816 18:22:44.383421   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:22:44.383534   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:22:44.383549   75402 kubeadm.go:310] 
	I0816 18:22:44.383743   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:22:44.383877   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:22:44.383993   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:22:44.384092   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:22:44.384103   75402 kubeadm.go:310] 
	I0816 18:22:44.384783   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:22:44.384895   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:22:44.384986   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:22:44.385062   75402 kubeadm.go:394] duration metric: took 8m1.372176417s to StartCluster
	I0816 18:22:44.385108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:22:44.385173   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:22:44.425862   75402 cri.go:89] found id: ""
	I0816 18:22:44.425892   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.425901   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:22:44.425909   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:22:44.425982   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:22:44.461988   75402 cri.go:89] found id: ""
	I0816 18:22:44.462019   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.462030   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:22:44.462038   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:22:44.462109   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:22:44.496063   75402 cri.go:89] found id: ""
	I0816 18:22:44.496095   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.496106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:22:44.496114   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:22:44.496175   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:22:44.529875   75402 cri.go:89] found id: ""
	I0816 18:22:44.529899   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.529906   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:22:44.529912   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:22:44.529958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:22:44.565745   75402 cri.go:89] found id: ""
	I0816 18:22:44.565781   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.565791   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:22:44.565798   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:22:44.565860   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:22:44.604122   75402 cri.go:89] found id: ""
	I0816 18:22:44.604149   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.604160   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:22:44.604168   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:22:44.604228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:22:44.636607   75402 cri.go:89] found id: ""
	I0816 18:22:44.636658   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.636669   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:22:44.636677   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:22:44.636736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:22:44.670942   75402 cri.go:89] found id: ""
	I0816 18:22:44.670973   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.670981   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:22:44.670989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:22:44.671001   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:22:44.722403   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:22:44.722433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:22:44.738587   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:22:44.738627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:22:44.854530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:22:44.854563   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:22:44.854579   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:22:44.957308   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:22:44.957342   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 18:22:44.997652   75402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:22:44.997714   75402 out.go:270] * 
	* 
	W0816 18:22:44.997804   75402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:44.997828   75402 out.go:270] * 
	* 
	W0816 18:22:44.998787   75402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:22:45.002189   75402 out.go:201] 
	W0816 18:22:45.003254   75402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:45.003310   75402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:22:45.003340   75402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:22:45.004826   75402 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
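The log above ends with minikube's own suggestion to check 'journalctl -xeu kubelet' and to pass --extra-config=kubelet.cgroup-driver=systemd on the next start. A minimal retry along those lines, reusing the exact arguments from the failed run and only appending the suggested flag (a sketch assumed from that suggestion, not verified against this failure), would be:

	out/minikube-linux-amd64 start -p old-k8s-version-783465 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Before retrying, the on-node triage commands quoted verbatim in the kubeadm output (systemctl status kubelet, journalctl -xeu kubelet, and the crictl listing) are the natural first check.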
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (215.150743ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-783465 logs -n 25
E0816 18:22:46.272434   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-783465 logs -n 25: (1.591745286s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo cat                      | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:10:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
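
The header above describes the klog-style prefix used by every log line that follows. As a minimal, hypothetical sketch (not minikube code), a Go regular expression can split such a line into level, date, time, PID, source location, and message:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches lines like:
//   I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
// Capture groups: level, mmdd, time, pid, file:line, message.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^\]]+)\] (.*)$`)

func main() {
	line := "I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("level=%s date=%s time=%s pid=%s at=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
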
	I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:10:53.101401   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101412   75402 out.go:358] Setting ErrFile to fd 2...
	I0816 18:10:53.101418   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101600   75402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:10:53.102131   75402 out.go:352] Setting JSON to false
	I0816 18:10:53.103018   75402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1723825102,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:10:53.103076   75402 start.go:139] virtualization: kvm guest
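
The hostinfo entry above is JSON describing the CI host. The field names below are copied from that JSON; the decoding program itself is only an illustration:

package main

import (
	"encoding/json"
	"fmt"
)

// HostInfo mirrors a few of the keys visible in the hostinfo log line.
type HostInfo struct {
	Hostname      string `json:"hostname"`
	Uptime        uint64 `json:"uptime"`
	Procs         uint64 `json:"procs"`
	OS            string `json:"os"`
	Platform      string `json:"platform"`
	KernelVersion string `json:"kernelVersion"`
	KernelArch    string `json:"kernelArch"`
}

func main() {
	raw := `{"hostname":"ubuntu-20-agent-8","uptime":6751,"procs":197,"os":"linux","platform":"ubuntu","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64"}`
	var hi HostInfo
	if err := json.Unmarshal([]byte(raw), &hi); err != nil {
		panic(err)
	}
	fmt.Printf("%s (%s %s), uptime %ds\n", hi.Hostname, hi.OS, hi.KernelVersion, hi.Uptime)
}
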
	I0816 18:10:53.105216   75402 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:10:53.106496   75402 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:10:53.106504   75402 notify.go:220] Checking for updates...
	I0816 18:10:53.109235   75402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:10:53.110572   75402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:10:53.111747   75402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:10:53.113164   75402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:10:53.114589   75402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:10:53.116284   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:10:53.116746   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.116806   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.132445   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0816 18:10:53.132886   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.133456   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.133494   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.133836   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.134015   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.135791   75402 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:10:53.136942   75402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:10:53.137229   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.137260   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.151853   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0816 18:10:53.152327   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.152881   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.152905   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.153159   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.153307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.188002   75402 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:10:53.189287   75402 start.go:297] selected driver: kvm2
	I0816 18:10:53.189309   75402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.189432   75402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:10:53.190098   75402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.190187   75402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:10:53.205024   75402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:10:53.205386   75402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:10:53.205417   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:10:53.205425   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:10:53.205458   75402 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
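
The dump above is the persisted cluster config that the restart validates and then reuses. The Go types below are an illustrative subset of the fields visible in that dump, not minikube's actual definitions:

package main

import "fmt"

// Illustrative subset of the fields printed in the config dumps above;
// the real minikube config type contains many more.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type ClusterConfig struct {
	Name             string
	Memory           int // MiB
	CPUs             int
	Driver           string
	KVMQemuURI       string
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:       "old-k8s-version-783465",
		Memory:     2200,
		CPUs:       2,
		Driver:     "kvm2",
		KVMQemuURI: "qemu:///system",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.20.0",
			ClusterName:       "old-k8s-version-783465",
			ContainerRuntime:  "crio",
		},
	}
	fmt.Printf("%+v\n", cfg)
}
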
	I0816 18:10:53.205557   75402 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.207241   75402 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:10:53.208254   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:10:53.208286   75402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:10:53.208298   75402 cache.go:56] Caching tarball of preloaded images
	I0816 18:10:53.208386   75402 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:10:53.208400   75402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:10:53.208510   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:10:53.208736   75402 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
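
acquireMachinesLock serialises concurrent profile starts on the same host, retrying every 500ms for up to 13 minutes per the parameters logged above. The polling file lock below is only a sketch of that idea, not the mechanism minikube uses:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file until it wins or times out.
// This is a simplified illustration of the Delay/Timeout values in the log.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock acquired")
}
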
	I0816 18:10:54.604889   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:10:57.676891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:03.756940   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:06.828911   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:12.908885   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:15.980925   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:22.060891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:25.132961   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:31.212919   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:34.284876   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:40.365032   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:43.436910   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:49.516914   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:52.588969   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:58.668915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:01.740965   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:07.820898   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:10.892922   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:16.972913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:20.044913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:26.124921   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:29.196968   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:35.276952   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:38.348971   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:44.428932   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:47.500897   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:53.580923   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:56.652927   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:02.732992   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:05.804929   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:11.884953   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:14.956943   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:21.036963   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:24.108915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:30.188851   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:33.260936   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:36.264963   74828 start.go:364] duration metric: took 4m2.37855556s to acquireMachinesLock for "no-preload-864476"
	I0816 18:13:36.265020   74828 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:36.265027   74828 fix.go:54] fixHost starting: 
	I0816 18:13:36.265379   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:36.265409   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:36.280707   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0816 18:13:36.281167   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:36.281747   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:13:36.281778   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:36.282122   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:36.282330   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:36.282457   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:13:36.284064   74828 fix.go:112] recreateIfNeeded on no-preload-864476: state=Stopped err=<nil>
	I0816 18:13:36.284084   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	W0816 18:13:36.284217   74828 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:36.286749   74828 out.go:177] * Restarting existing kvm2 VM for "no-preload-864476" ...
	I0816 18:13:36.262619   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:36.262654   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.262944   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:13:36.262967   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.263222   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:13:36.264803   74510 machine.go:96] duration metric: took 4m37.429582668s to provisionDockerMachine
	I0816 18:13:36.264858   74510 fix.go:56] duration metric: took 4m37.449862851s for fixHost
	I0816 18:13:36.264867   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 4m37.449881856s
	W0816 18:13:36.264895   74510 start.go:714] error starting host: provision: host is not running
	W0816 18:13:36.264994   74510 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 18:13:36.265005   74510 start.go:729] Will try again in 5 seconds ...
	I0816 18:13:36.288329   74828 main.go:141] libmachine: (no-preload-864476) Calling .Start
	I0816 18:13:36.288484   74828 main.go:141] libmachine: (no-preload-864476) Ensuring networks are active...
	I0816 18:13:36.289285   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network default is active
	I0816 18:13:36.289912   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network mk-no-preload-864476 is active
	I0816 18:13:36.290318   74828 main.go:141] libmachine: (no-preload-864476) Getting domain xml...
	I0816 18:13:36.291176   74828 main.go:141] libmachine: (no-preload-864476) Creating domain...
	I0816 18:13:37.504191   74828 main.go:141] libmachine: (no-preload-864476) Waiting to get IP...
	I0816 18:13:37.505110   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.505575   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.505621   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.505543   75973 retry.go:31] will retry after 308.411866ms: waiting for machine to come up
	I0816 18:13:37.816219   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.816877   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.816931   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.816852   75973 retry.go:31] will retry after 321.445064ms: waiting for machine to come up
	I0816 18:13:38.140594   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.141059   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.141082   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.141018   75973 retry.go:31] will retry after 337.935433ms: waiting for machine to come up
	I0816 18:13:38.480699   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.481110   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.481135   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.481033   75973 retry.go:31] will retry after 449.775503ms: waiting for machine to come up
	I0816 18:13:41.266589   74510 start.go:360] acquireMachinesLock for embed-certs-777541: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:13:38.932812   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.933232   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.933259   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.933171   75973 retry.go:31] will retry after 482.676832ms: waiting for machine to come up
	I0816 18:13:39.417939   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:39.418323   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:39.418350   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:39.418276   75973 retry.go:31] will retry after 740.37516ms: waiting for machine to come up
	I0816 18:13:40.160491   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:40.160917   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:40.160942   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:40.160867   75973 retry.go:31] will retry after 1.10464436s: waiting for machine to come up
	I0816 18:13:41.267213   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:41.267654   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:41.267680   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:41.267613   75973 retry.go:31] will retry after 1.395131164s: waiting for machine to come up
	I0816 18:13:42.664731   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:42.665229   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:42.665252   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:42.665181   75973 retry.go:31] will retry after 1.560403289s: waiting for machine to come up
	I0816 18:13:44.226847   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:44.227375   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:44.227404   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:44.227342   75973 retry.go:31] will retry after 1.647944685s: waiting for machine to come up
	I0816 18:13:45.876965   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:45.877411   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:45.877440   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:45.877366   75973 retry.go:31] will retry after 1.971325886s: waiting for machine to come up
	I0816 18:13:47.849950   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:47.850457   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:47.850490   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:47.850383   75973 retry.go:31] will retry after 2.95642392s: waiting for machine to come up
	I0816 18:13:50.810560   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:50.811013   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:50.811045   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:50.810930   75973 retry.go:31] will retry after 4.510008193s: waiting for machine to come up
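
While the restarted VM boots, the driver polls for its DHCP-assigned address with a growing, jittered delay (308ms up to several seconds in the retries above). The sketch below mirrors that retry shape; lookup is a hypothetical stand-in for the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookup with a growing, jittered delay, mirroring the
// "will retry after ..." cadence in the log. lookup is a placeholder for
// the actual DHCP-lease query; it is not a minikube API.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the base delay each attempt
	}
	return "", errors.New("machine did not obtain an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.50", nil
	}, 10)
	fmt.Println(ip, err)
}
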
	I0816 18:13:56.529339   75006 start.go:364] duration metric: took 4m6.515818295s to acquireMachinesLock for "default-k8s-diff-port-256678"
	I0816 18:13:56.529444   75006 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:56.529459   75006 fix.go:54] fixHost starting: 
	I0816 18:13:56.529851   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:56.529890   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:56.547077   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0816 18:13:56.547585   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:56.548068   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:13:56.548091   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:56.548421   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:56.548610   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:13:56.548766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:13:56.550373   75006 fix.go:112] recreateIfNeeded on default-k8s-diff-port-256678: state=Stopped err=<nil>
	I0816 18:13:56.550414   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	W0816 18:13:56.550604   75006 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:56.552781   75006 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-256678" ...
	I0816 18:13:55.326062   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.326558   74828 main.go:141] libmachine: (no-preload-864476) Found IP for machine: 192.168.50.50
	I0816 18:13:55.326576   74828 main.go:141] libmachine: (no-preload-864476) Reserving static IP address...
	I0816 18:13:55.326593   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has current primary IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.327109   74828 main.go:141] libmachine: (no-preload-864476) Reserved static IP address: 192.168.50.50
	I0816 18:13:55.327142   74828 main.go:141] libmachine: (no-preload-864476) Waiting for SSH to be available...
	I0816 18:13:55.327167   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.327191   74828 main.go:141] libmachine: (no-preload-864476) DBG | skip adding static IP to network mk-no-preload-864476 - found existing host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"}
	I0816 18:13:55.327205   74828 main.go:141] libmachine: (no-preload-864476) DBG | Getting to WaitForSSH function...
	I0816 18:13:55.329001   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329350   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.329378   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329534   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH client type: external
	I0816 18:13:55.329574   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa (-rw-------)
	I0816 18:13:55.329604   74828 main.go:141] libmachine: (no-preload-864476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:13:55.329622   74828 main.go:141] libmachine: (no-preload-864476) DBG | About to run SSH command:
	I0816 18:13:55.329636   74828 main.go:141] libmachine: (no-preload-864476) DBG | exit 0
	I0816 18:13:55.452553   74828 main.go:141] libmachine: (no-preload-864476) DBG | SSH cmd err, output: <nil>: 
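
Provisioning shells out to the system ssh binary with host-key checking disabled and the per-machine identity file, as shown in the argument list above. The helper below is a hypothetical reconstruction of that invocation with os/exec; host, user, and key path are taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// sshCommand builds an ssh invocation similar to the one logged above:
// no known_hosts pollution, key-only auth, and a fixed identity file.
func sshCommand(user, host, keyPath, remoteCmd string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		remoteCmd,
	}
	return exec.Command("ssh", args...)
}

func main() {
	cmd := sshCommand("docker", "192.168.50.50",
		"/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa",
		"exit 0")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v output=%q\n", err, out)
}
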
	I0816 18:13:55.452964   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetConfigRaw
	I0816 18:13:55.453557   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.455951   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456334   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.456370   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456564   74828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/config.json ...
	I0816 18:13:55.456782   74828 machine.go:93] provisionDockerMachine start ...
	I0816 18:13:55.456801   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:55.456983   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.459149   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459547   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.459570   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459730   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.459918   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460068   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460207   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.460418   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.460603   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.460637   74828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:13:55.564875   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:13:55.564903   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565203   74828 buildroot.go:166] provisioning hostname "no-preload-864476"
	I0816 18:13:55.565229   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565455   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.568114   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568578   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.568612   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568777   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.568912   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569023   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569200   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.569448   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.569649   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.569667   74828 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-864476 && echo "no-preload-864476" | sudo tee /etc/hostname
	I0816 18:13:55.686349   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-864476
	
	I0816 18:13:55.686378   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.689171   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689572   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.689608   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689792   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.690008   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690183   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690418   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.690623   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.690782   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.690798   74828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864476/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:13:55.800352   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:55.800386   74828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:13:55.800436   74828 buildroot.go:174] setting up certificates
	I0816 18:13:55.800452   74828 provision.go:84] configureAuth start
	I0816 18:13:55.800470   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.800793   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.803388   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.803786   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.803822   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.804025   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.806567   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.806977   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.807003   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.807129   74828 provision.go:143] copyHostCerts
	I0816 18:13:55.807178   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:13:55.807198   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:13:55.807286   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:13:55.807401   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:13:55.807412   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:13:55.807439   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:13:55.807554   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:13:55.807565   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:13:55.807588   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:13:55.807648   74828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.no-preload-864476 san=[127.0.0.1 192.168.50.50 localhost minikube no-preload-864476]
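
The server certificate is generated with SANs covering the loopback address, the VM IP, and the machine names listed above. The snippet below is a compact illustration of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-864476"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.50")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-864476"},
	}
	// Self-signed for brevity; a real setup would sign with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}
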
	I0816 18:13:55.881474   74828 provision.go:177] copyRemoteCerts
	I0816 18:13:55.881529   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:13:55.881558   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.884424   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.884952   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.884983   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.885138   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.885335   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.885486   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.885669   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:55.966915   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:13:55.989812   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:13:56.011744   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:13:56.032745   74828 provision.go:87] duration metric: took 232.276991ms to configureAuth
	I0816 18:13:56.032778   74828 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:13:56.033001   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:13:56.033096   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.035919   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036283   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.036311   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036499   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.036713   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036975   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.037100   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.037275   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.037294   74828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:13:56.296112   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:13:56.296140   74828 machine.go:96] duration metric: took 839.343895ms to provisionDockerMachine
	I0816 18:13:56.296152   74828 start.go:293] postStartSetup for "no-preload-864476" (driver="kvm2")
	I0816 18:13:56.296162   74828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:13:56.296177   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.296537   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:13:56.296570   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.299838   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300364   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.300396   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300603   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.300833   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.300985   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.301187   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.383095   74828 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:13:56.387172   74828 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:13:56.387200   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:13:56.387286   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:13:56.387392   74828 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:13:56.387550   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:13:56.396072   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:13:56.419470   74828 start.go:296] duration metric: took 123.306644ms for postStartSetup
	I0816 18:13:56.419509   74828 fix.go:56] duration metric: took 20.154482872s for fixHost
	I0816 18:13:56.419529   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.422047   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422454   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.422503   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422573   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.422764   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.422963   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.423150   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.423388   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.423597   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.423610   74828 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:13:56.529164   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832036.506687395
	
	I0816 18:13:56.529190   74828 fix.go:216] guest clock: 1723832036.506687395
	I0816 18:13:56.529200   74828 fix.go:229] Guest: 2024-08-16 18:13:56.506687395 +0000 UTC Remote: 2024-08-16 18:13:56.419513163 +0000 UTC m=+262.671840210 (delta=87.174232ms)
	I0816 18:13:56.529229   74828 fix.go:200] guest clock delta is within tolerance: 87.174232ms
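
fix.go reads the guest clock over SSH with `date +%s.%N` and compares it to the host's wall clock; it only resyncs when the delta exceeds a tolerance. A rough sketch of that comparison, using the timestamps from the log above (the 2s tolerance is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute guest/host clock difference and whether it
// falls within the allowed tolerance.
func clockDelta(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log: guest clock vs. the host-side timestamp.
	guest := time.Date(2024, 8, 16, 18, 13, 56, 506687395, time.UTC)
	remote := time.Date(2024, 8, 16, 18, 13, 56, 419513163, time.UTC)

	delta, ok := clockDelta(guest, remote, 2*time.Second) // tolerance is illustrative
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
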
	I0816 18:13:56.529246   74828 start.go:83] releasing machines lock for "no-preload-864476", held for 20.264231324s
	I0816 18:13:56.529276   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.529645   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:56.532279   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532599   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.532660   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532824   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533348   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533522   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533604   74828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:13:56.533663   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.533759   74828 ssh_runner.go:195] Run: cat /version.json
	I0816 18:13:56.533786   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.536427   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536711   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536822   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.536845   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536996   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537071   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.537105   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.537191   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537334   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537430   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537497   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.537582   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537728   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537964   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.654319   74828 ssh_runner.go:195] Run: systemctl --version
	I0816 18:13:56.660640   74828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:13:56.806359   74828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:13:56.812415   74828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:13:56.812489   74828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:13:56.828095   74828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:13:56.828122   74828 start.go:495] detecting cgroup driver to use...
	I0816 18:13:56.828186   74828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:13:56.843041   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:13:56.856322   74828 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:13:56.856386   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:13:56.869899   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:13:56.884609   74828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:13:56.990986   74828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:13:57.134218   74828 docker.go:233] disabling docker service ...
	I0816 18:13:57.134283   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:13:57.156415   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:13:57.172969   74828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:13:57.328279   74828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:13:57.448217   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:13:57.461630   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:13:57.478199   74828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:13:57.478271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.487845   74828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:13:57.487918   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.497895   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.509260   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.519090   74828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:13:57.529351   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.539816   74828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.559271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.573027   74828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:13:57.583410   74828 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:13:57.583490   74828 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:13:57.598762   74828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:13:57.609589   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:13:57.727016   74828 ssh_runner.go:195] Run: sudo systemctl restart crio
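
The block above applies the CRI-O settings and then handles netfilter: when the bridge-netfilter sysctl cannot be read, the br_netfilter module is loaded, IPv4 forwarding is switched on, and CRI-O is restarted. A hedged sketch of the same fallback, shelling out the way ssh_runner does (command strings copied from the log, error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command as root and returns its combined output.
func run(cmdline string) (string, error) {
	out, err := exec.Command("sudo", "sh", "-c", cmdline).CombinedOutput()
	return string(out), err
}

func main() {
	// Try to read the bridge-netfilter sysctl first.
	if _, err := run("sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// Not available yet: load the module, as in the log above.
		if _, err := run("modprobe br_netfilter"); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}

	// Make sure IPv4 forwarding is on before restarting the runtime.
	if _, err := run("echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
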
	I0816 18:13:57.876815   74828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:13:57.876876   74828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:13:57.882172   74828 start.go:563] Will wait 60s for crictl version
	I0816 18:13:57.882241   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:57.885706   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:13:57.926981   74828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:13:57.927070   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.957802   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.984920   74828 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:13:57.986450   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:57.989584   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990205   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:57.990257   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990552   74828 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 18:13:57.994584   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:13:58.007996   74828 kubeadm.go:883] updating cluster {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0816 18:13:58.008137   74828 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:13:58.008184   74828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:13:58.041643   74828 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:13:58.041672   74828 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:13:58.041751   74828 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.041778   74828 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.041794   74828 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.041741   74828 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.041779   74828 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.041899   74828 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 18:13:58.041918   74828 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.041798   74828 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.043388   74828 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.043394   74828 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.289223   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.299125   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.308703   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 18:13:58.339031   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.351467   74828 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 18:13:58.351514   74828 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.351572   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.358019   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.359198   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.385487   74828 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 18:13:58.385529   74828 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.385571   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.392417   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.506834   74828 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 18:13:58.506886   74828 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.506896   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.506924   74828 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 18:13:58.506963   74828 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.507003   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.506928   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507072   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.507004   74828 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 18:13:58.507099   74828 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.507124   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507160   74828 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 18:13:58.507181   74828 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.507228   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.562410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.562469   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.562481   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.562554   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.562590   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.562628   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.686069   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.690288   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.690352   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.692851   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.692911   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.693027   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.777263   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:56.554238   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Start
	I0816 18:13:56.554426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring networks are active...
	I0816 18:13:56.555221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network default is active
	I0816 18:13:56.555599   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network mk-default-k8s-diff-port-256678 is active
	I0816 18:13:56.556004   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Getting domain xml...
	I0816 18:13:56.556809   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Creating domain...
	I0816 18:13:57.825641   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting to get IP...
	I0816 18:13:57.826681   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827158   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:57.827129   76107 retry.go:31] will retry after 267.923612ms: waiting for machine to come up
	I0816 18:13:58.096794   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.097158   76107 retry.go:31] will retry after 286.726817ms: waiting for machine to come up
	I0816 18:13:58.386213   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386782   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.386704   76107 retry.go:31] will retry after 386.697374ms: waiting for machine to come up
	I0816 18:13:58.775483   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.775989   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.776014   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.775949   76107 retry.go:31] will retry after 554.398617ms: waiting for machine to come up
	I0816 18:13:59.331517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332002   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.331943   76107 retry.go:31] will retry after 589.24333ms: waiting for machine to come up
	I0816 18:13:58.823309   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 18:13:58.823318   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 18:13:58.823410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.823434   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.823437   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:13:58.823549   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.836312   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.894363   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 18:13:58.894428   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 18:13:58.894447   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:13:58.933183   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 18:13:58.933290   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:13:58.934389   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 18:13:58.934456   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 18:13:58.934491   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 18:13:58.934550   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:13:58.934569   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:13:58.934682   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792156   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.897633034s)
	I0816 18:14:00.792196   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 18:14:00.792224   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.89763588s)
	I0816 18:14:00.792257   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 18:14:00.792230   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792281   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.858968807s)
	I0816 18:14:00.792300   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 18:14:00.792317   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792355   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.85778817s)
	I0816 18:14:00.792370   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 18:14:00.792415   74828 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.857704749s)
	I0816 18:14:00.792422   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.857843473s)
	I0816 18:14:00.792436   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 18:14:00.792457   74828 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 18:14:00.792491   74828 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792528   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:14:00.797103   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:03.171070   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.378727123s)
	I0816 18:14:03.171118   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 18:14:03.171149   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.374004458s)
	I0816 18:14:03.171155   74828 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171274   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171225   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:59.922834   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923439   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.923368   76107 retry.go:31] will retry after 779.656786ms: waiting for machine to come up
	I0816 18:14:00.704929   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705395   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705417   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:00.705344   76107 retry.go:31] will retry after 790.87115ms: waiting for machine to come up
	I0816 18:14:01.497557   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.497999   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.498052   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:01.497981   76107 retry.go:31] will retry after 919.825072ms: waiting for machine to come up
	I0816 18:14:02.419821   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420280   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420312   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:02.420227   76107 retry.go:31] will retry after 1.304504009s: waiting for machine to come up
	I0816 18:14:03.725928   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726378   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726400   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:03.726344   76107 retry.go:31] will retry after 2.105251359s: waiting for machine to come up
	I0816 18:14:06.879864   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.708558161s)
	I0816 18:14:06.879904   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 18:14:06.879905   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.708563338s)
	I0816 18:14:06.879935   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:06.879981   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:06.879991   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:08.769077   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.889063218s)
	I0816 18:14:08.769114   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 18:14:08.769145   74828 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769231   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769146   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.889146748s)
	I0816 18:14:08.769343   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 18:14:08.769431   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:05.833605   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834078   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834109   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:05.834025   76107 retry.go:31] will retry after 2.042421539s: waiting for machine to come up
	I0816 18:14:07.878000   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878510   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878541   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:07.878432   76107 retry.go:31] will retry after 2.777402825s: waiting for machine to come up
	I0816 18:14:10.627286   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.858028746s)
	I0816 18:14:10.627331   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 18:14:10.627346   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.857891086s)
	I0816 18:14:10.627358   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:10.627378   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 18:14:10.627402   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:11.977277   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.349851948s)
	I0816 18:14:11.977314   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 18:14:11.977339   74828 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:11.977389   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:12.630939   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 18:14:12.630999   74828 cache_images.go:123] Successfully loaded all cached images
	I0816 18:14:12.631004   74828 cache_images.go:92] duration metric: took 14.589319022s to LoadCachedImages
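
The LoadCachedImages loop above follows one pattern per image: inspect it in the runtime, remove any copy whose hash does not match with crictl, skip the transfer when the tarball already exists under /var/lib/minikube/images, then load the tarball with podman. A compact sketch of that per-image flow (paths and helper names are illustrative, not the real cache_images implementation):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImage mirrors the per-image steps from the log: check the tarball,
// drop a stale runtime copy, then load the tarball with podman.
func loadCachedImage(image, tarball string) error {
	// Skip the copy when the tarball is already on the VM
	// (the "copy: skipping ... (exists)" lines above).
	if err := exec.Command("stat", tarball).Run(); err != nil {
		return fmt.Errorf("tarball %s not present, would need to scp it first: %w", tarball, err)
	}

	// Remove any wrong-hash copy from the container runtime.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()

	// Load the cached image into CRI-O's storage via podman.
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	dir := "/var/lib/minikube/images"
	for img, tar := range map[string]string{
		"registry.k8s.io/kube-proxy:v1.31.0": "kube-proxy_v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0":      "etcd_3.5.15-0",
	} {
		if err := loadCachedImage(img, filepath.Join(dir, tar)); err != nil {
			fmt.Println(err)
		}
	}
}
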
	I0816 18:14:12.631016   74828 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.31.0 crio true true} ...
	I0816 18:14:12.631132   74828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-864476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:12.631207   74828 ssh_runner.go:195] Run: crio config
	I0816 18:14:12.683072   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:12.683094   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:12.683107   74828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:12.683129   74828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864476 NodeName:no-preload-864476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:12.683276   74828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864476"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:12.683345   74828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:12.693879   74828 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:12.693941   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:12.702601   74828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 18:14:12.718235   74828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:12.733455   74828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 18:14:12.748878   74828 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:12.752276   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
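
The hosts-file update above strips any existing control-plane.minikube.internal entry, re-appends it, and copies the result back into place via a temp file. An equivalent sketch in Go (the helper name ensureHostsEntry is made up for illustration):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line mapping
// name to ip, mirroring the grep -v / append / cp sequence from the log.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous entry for this name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)

	// Write to a temp file first, then rename, so the update is atomic.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.50.50", "control-plane.minikube.internal")
}
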
	I0816 18:14:12.763390   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:12.872450   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:12.888531   74828 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476 for IP: 192.168.50.50
	I0816 18:14:12.888569   74828 certs.go:194] generating shared ca certs ...
	I0816 18:14:12.888589   74828 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:12.888783   74828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:12.888845   74828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:12.888860   74828 certs.go:256] generating profile certs ...
	I0816 18:14:12.888971   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/client.key
	I0816 18:14:12.889070   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key.30cf6dcb
	I0816 18:14:12.889136   74828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key
	I0816 18:14:12.889298   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:12.889339   74828 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:12.889351   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:12.889391   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:12.889421   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:12.889452   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:12.889507   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:12.890441   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:12.919571   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:12.947375   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:12.975197   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:13.007308   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:14:13.056151   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:13.080317   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:13.102231   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:13.124045   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:13.145312   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:13.166806   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:13.188173   74828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:13.203594   74828 ssh_runner.go:195] Run: openssl version
	I0816 18:14:13.209148   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:13.220266   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224569   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224635   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.230141   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:13.241362   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:13.252437   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256658   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256712   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.262006   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:13.273168   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:13.284518   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288566   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288611   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.293944   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:13.305148   74828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:13.309460   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:13.315123   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:13.320854   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:13.326676   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:13.332183   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:13.337794   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
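The "openssl x509 -checkend 86400" runs above verify that each reused control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. A minimal Go sketch of the same check, assuming a local PEM file path taken from the log (not minikube's actual implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate in pemPath stays valid for at
// least the given duration, mirroring "openssl x509 -checkend".
func checkend(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}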
	I0816 18:14:13.343369   74828 kubeadm.go:392] StartCluster: {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:13.343470   74828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:13.343527   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.384490   74828 cri.go:89] found id: ""
	I0816 18:14:13.384567   74828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:13.395094   74828 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:13.395116   74828 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:13.395183   74828 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:13.406605   74828 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:13.407898   74828 kubeconfig.go:125] found "no-preload-864476" server: "https://192.168.50.50:8443"
	I0816 18:14:13.410808   74828 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:13.420516   74828 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.50
	I0816 18:14:13.420541   74828 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:13.420554   74828 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:13.420589   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.459174   74828 cri.go:89] found id: ""
	I0816 18:14:13.459242   74828 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:13.475598   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:13.484685   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:13.484707   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:13.484756   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:13.493092   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:13.493147   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:13.501649   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:13.509987   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:13.510028   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:13.518500   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.526689   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:13.526737   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.535606   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:13.545130   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:13.545185   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:13.553947   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:13.562763   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:13.663383   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:10.657652   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658105   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:10.657999   76107 retry.go:31] will retry after 3.856225979s: waiting for machine to come up
	I0816 18:14:14.518358   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.518875   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Found IP for machine: 192.168.72.144
	I0816 18:14:14.518896   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserving static IP address...
	I0816 18:14:14.518915   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has current primary IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.519296   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserved static IP address: 192.168.72.144
	I0816 18:14:14.519334   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.519346   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for SSH to be available...
	I0816 18:14:14.519377   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | skip adding static IP to network mk-default-k8s-diff-port-256678 - found existing host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"}
	I0816 18:14:14.519391   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Getting to WaitForSSH function...
	I0816 18:14:14.521566   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.521926   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.521969   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.522133   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH client type: external
	I0816 18:14:14.522160   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa (-rw-------)
	I0816 18:14:14.522202   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:14.522221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | About to run SSH command:
	I0816 18:14:14.522235   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | exit 0
	I0816 18:14:14.648603   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:14.649005   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetConfigRaw
	I0816 18:14:14.649616   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:14.652340   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.652767   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.652796   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.653116   75006 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/config.json ...
	I0816 18:14:14.653337   75006 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:14.653361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:14.653598   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.656062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656412   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.656442   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656565   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.656757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.656895   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.657015   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.657128   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.657312   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.657321   75006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:14.768721   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:14.768749   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.768990   75006 buildroot.go:166] provisioning hostname "default-k8s-diff-port-256678"
	I0816 18:14:14.769021   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.769246   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.772310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772675   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.772704   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772922   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.773084   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773242   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.773564   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.773764   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.773783   75006 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-256678 && echo "default-k8s-diff-port-256678" | sudo tee /etc/hostname
	I0816 18:14:14.894016   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-256678
	
	I0816 18:14:14.894047   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.896797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897150   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.897184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897424   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.897613   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897933   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.898124   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.898286   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.898303   75006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-256678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-256678/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-256678' | sudo tee -a /etc/hosts; 
				fi
			fi
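The shell fragment above makes the freshly set hostname resolve locally: if no /etc/hosts line already mentions it, the existing 127.0.1.1 entry is rewritten, otherwise a new one is appended. A rough Go equivalent of that logic, operating on the file contents in memory (the hostname value is the one from the log; the helper name is illustrative):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry returns hosts with a 127.0.1.1 entry for hostname,
// rewriting an existing 127.0.1.1 line or appending a new one, and leaving
// the contents untouched if the hostname already appears somewhere.
func ensureHostsEntry(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.Contains(l, hostname) {
			return hosts // already resolvable
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(in, "default-k8s-diff-port-256678"))
}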
	I0816 18:14:15.814480   75402 start.go:364] duration metric: took 3m22.605706427s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:14:15.814546   75402 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:15.814554   75402 fix.go:54] fixHost starting: 
	I0816 18:14:15.815001   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:15.815062   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:15.834710   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0816 18:14:15.835124   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:15.835653   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:14:15.835676   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:15.836005   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:15.836258   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:15.836392   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:14:15.838010   75402 fix.go:112] recreateIfNeeded on old-k8s-version-783465: state=Stopped err=<nil>
	I0816 18:14:15.838043   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	W0816 18:14:15.838200   75402 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:15.840214   75402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	I0816 18:14:15.016150   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:15.016176   75006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:15.016200   75006 buildroot.go:174] setting up certificates
	I0816 18:14:15.016213   75006 provision.go:84] configureAuth start
	I0816 18:14:15.016231   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:15.016518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.019132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019687   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.019725   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019907   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.022758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023192   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.023233   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023408   75006 provision.go:143] copyHostCerts
	I0816 18:14:15.023468   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:15.023489   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:15.023552   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:15.023649   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:15.023659   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:15.023681   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:15.023733   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:15.023740   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:15.023756   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:15.023802   75006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-256678 san=[127.0.0.1 192.168.72.144 default-k8s-diff-port-256678 localhost minikube]
	I0816 18:14:15.142549   75006 provision.go:177] copyRemoteCerts
	I0816 18:14:15.142601   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:15.142625   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.145515   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.145867   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.145903   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.146029   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.146250   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.146436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.146604   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.230785   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:15.258450   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 18:14:15.286008   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:15.308690   75006 provision.go:87] duration metric: took 292.45797ms to configureAuth
	I0816 18:14:15.308725   75006 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:15.308927   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:15.308996   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.311959   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.312332   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312492   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.312713   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.312890   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.313028   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.313184   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.313369   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.313387   75006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:15.574487   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:15.574517   75006 machine.go:96] duration metric: took 921.166622ms to provisionDockerMachine
	I0816 18:14:15.574529   75006 start.go:293] postStartSetup for "default-k8s-diff-port-256678" (driver="kvm2")
	I0816 18:14:15.574538   75006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:15.574552   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.574835   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:15.574854   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.577944   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578266   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.578295   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578469   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.578651   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.578800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.578912   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.664404   75006 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:15.668362   75006 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:15.668389   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:15.668459   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:15.668562   75006 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:15.668705   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:15.678830   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:15.702087   75006 start.go:296] duration metric: took 127.545675ms for postStartSetup
	I0816 18:14:15.702129   75006 fix.go:56] duration metric: took 19.172678011s for fixHost
	I0816 18:14:15.702152   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.704680   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705117   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.705154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705288   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.705479   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705643   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.705922   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.706084   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.706095   75006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:15.814313   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832055.788948458
	
	I0816 18:14:15.814337   75006 fix.go:216] guest clock: 1723832055.788948458
	I0816 18:14:15.814348   75006 fix.go:229] Guest: 2024-08-16 18:14:15.788948458 +0000 UTC Remote: 2024-08-16 18:14:15.702133997 +0000 UTC m=+265.826862410 (delta=86.814461ms)
	I0816 18:14:15.814372   75006 fix.go:200] guest clock delta is within tolerance: 86.814461ms
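The guest-clock check above runs "date +%s.%N" inside the VM, parses the result, and compares it with the host's view of the same instant; provisioning continues only while the delta stays inside a tolerance. A small sketch of that comparison, with the parsing simplified and the tolerance value chosen purely for illustration (not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's "seconds.nanoseconds" timestamp (the output
// of `date +%s.%N`) and returns how far it drifts from the host time.
func clockDelta(guest string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guestTime), nil
}

func main() {
	const tolerance = time.Second // illustrative threshold only
	delta, err := clockDelta("1723832055.788948458", time.Unix(0, 1723832055702133997))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta < tolerance)
}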
	I0816 18:14:15.814382   75006 start.go:83] releasing machines lock for "default-k8s-diff-port-256678", held for 19.284958633s
	I0816 18:14:15.814416   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.814723   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.817995   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.818467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818620   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819299   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819616   75006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:15.819656   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.819840   75006 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:15.819869   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.822797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823189   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823521   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823659   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.823804   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823811   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.823828   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823965   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824064   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.824177   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.824234   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.824368   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824486   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.948709   75006 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:15.956239   75006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:16.103538   75006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:16.109299   75006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:16.109385   75006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:16.125056   75006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:16.125092   75006 start.go:495] detecting cgroup driver to use...
	I0816 18:14:16.125188   75006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:16.141741   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:16.158917   75006 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:16.158993   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:16.173256   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:16.187026   75006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:16.332452   75006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:16.503181   75006 docker.go:233] disabling docker service ...
	I0816 18:14:16.503254   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:16.517961   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:16.535991   75006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:16.667874   75006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:16.799300   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:16.813852   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:16.832891   75006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:16.832953   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.845621   75006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:16.845716   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.856045   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.866117   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.877586   75006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:16.887643   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.897164   75006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.915247   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
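The sed invocations above point cri-o at the desired pause image and switch it to the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf in place. A minimal Go sketch of the same line rewrite on an in-memory copy of the config; the regular expressions mirror the sed patterns, and the file I/O is left out:

package main

import (
	"fmt"
	"regexp"
)

var (
	pauseImageRe   = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupDriverRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// configureCrio rewrites the pause_image and cgroup_manager settings in a
// crio.conf.d drop-in, the way the sed commands in the log do on the VM.
func configureCrio(conf, pauseImage, cgroupDriver string) string {
	conf = pauseImageRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupDriverRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupDriver))
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(configureCrio(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}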
	I0816 18:14:16.924887   75006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:16.933645   75006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:16.933709   75006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:16.946920   75006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:16.955928   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:17.090148   75006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:17.241434   75006 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:17.241531   75006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:17.246730   75006 start.go:563] Will wait 60s for crictl version
	I0816 18:14:17.246796   75006 ssh_runner.go:195] Run: which crictl
	I0816 18:14:17.250397   75006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:17.289194   75006 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:17.289295   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.324401   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.361220   75006 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:15.841411   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .Start
	I0816 18:14:15.841576   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:14:15.842263   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:14:15.842609   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:14:15.843023   75402 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:14:15.844141   75402 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:14:17.215163   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:14:17.216445   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.216933   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.217029   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.216922   76298 retry.go:31] will retry after 286.243503ms: waiting for machine to come up
	I0816 18:14:17.504645   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.505240   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.505262   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.505175   76298 retry.go:31] will retry after 275.715235ms: waiting for machine to come up
	I0816 18:14:17.782804   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.783365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.783392   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.783292   76298 retry.go:31] will retry after 343.088129ms: waiting for machine to come up
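The repeated "will retry after ...: waiting for machine to come up" lines come from a retry helper that re-checks the domain's DHCP lease with a growing, jittered delay until the VM reports an IP. A simplified sketch of that pattern, assuming a caller-supplied probe function and illustrative timing constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls probe until it succeeds or the deadline passes,
// sleeping a growing, jittered interval between attempts (the "will retry
// after ..." lines in the log).
func retryWithBackoff(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		wait *= 2
	}
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 10*time.Second)
	fmt.Println("result:", err)
}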
	I0816 18:14:14.936549   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273126441s)
	I0816 18:14:14.936584   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.139778   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.201814   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.270552   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:15.270667   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:15.771379   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.271296   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.335242   74828 api_server.go:72] duration metric: took 1.064710561s to wait for apiserver process to appear ...
	I0816 18:14:16.335265   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:16.335282   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:16.335727   74828 api_server.go:269] stopped: https://192.168.50.50:8443/healthz: Get "https://192.168.50.50:8443/healthz": dial tcp 192.168.50.50:8443: connect: connection refused
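While kubeadm brings the static pods back up, minikube polls the apiserver's /healthz endpoint: the first attempts fail with "connection refused", later ones (below) return 403 and then 500 as RBAC and the post-start hooks settle, and the wait ends once the endpoint answers 200. A bare-bones sketch of such a poll against the URL from the log; TLS verification is disabled here only to keep the example short, whereas a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.50:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}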
	I0816 18:14:16.835361   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:17.362436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:17.365728   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366122   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:17.366154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366403   75006 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:17.370322   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:17.383153   75006 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:17.383303   75006 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:17.383364   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:17.420269   75006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:17.420339   75006 ssh_runner.go:195] Run: which lz4
	I0816 18:14:17.424477   75006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:17.428507   75006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:17.428547   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:18.717202   75006 crio.go:462] duration metric: took 1.292754157s to copy over tarball
	I0816 18:14:18.717278   75006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:19.241691   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.241729   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.241746   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.292883   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.292924   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.336097   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.363715   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.363753   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:19.835848   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.840615   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.840666   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.336291   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.343751   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:20.343785   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.835470   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.841217   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:14:20.849609   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:20.849642   74828 api_server.go:131] duration metric: took 4.514370955s to wait for apiserver health ...
	I0816 18:14:20.849653   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:20.849662   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:20.851828   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:18.127538   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.128044   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.128077   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.127958   76298 retry.go:31] will retry after 543.91951ms: waiting for machine to come up
	I0816 18:14:18.673778   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.674328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.674351   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.674274   76298 retry.go:31] will retry after 694.978788ms: waiting for machine to come up
	I0816 18:14:19.370976   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.371577   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.371605   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.371538   76298 retry.go:31] will retry after 578.640883ms: waiting for machine to come up
	I0816 18:14:19.952328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.952917   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.952941   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.952863   76298 retry.go:31] will retry after 820.19233ms: waiting for machine to come up
	I0816 18:14:20.774767   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:20.775175   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:20.775200   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:20.775134   76298 retry.go:31] will retry after 1.262201815s: waiting for machine to come up
	I0816 18:14:22.038872   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:22.039357   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:22.039385   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:22.039302   76298 retry.go:31] will retry after 1.164593889s: waiting for machine to come up
	I0816 18:14:20.853121   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:20.866117   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:20.888451   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:20.902482   74828 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:20.902530   74828 system_pods.go:61] "coredns-6f6b679f8f-w9cbm" [9b50c913-f492-4432-a50a-e0f727a7b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:20.902545   74828 system_pods.go:61] "etcd-no-preload-864476" [e45a11b8-fa3e-4a6e-9d06-5d82fdaf20dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:20.902557   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [1cf82575-b520-4bc0-9e90-d40c02b4468d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:20.902568   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [8c9123e0-16a4-4940-8464-4bec383bac90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:20.902577   74828 system_pods.go:61] "kube-proxy-vdqxz" [0332e87e-5c0c-41f5-88a9-31b7f8494eb6] Running
	I0816 18:14:20.902587   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [6139753f-b5cf-4af5-a9fa-03fb220e3dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:20.902606   74828 system_pods.go:61] "metrics-server-6867b74b74-rxtwg" [f0d04fc9-24c0-47e3-afdc-f250ef07900c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:20.902620   74828 system_pods.go:61] "storage-provisioner" [65303dd8-27d7-4bf3-ae58-ff5fe556f17f] Running
	I0816 18:14:20.902631   74828 system_pods.go:74] duration metric: took 14.150825ms to wait for pod list to return data ...
	I0816 18:14:20.902645   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:20.909305   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:20.909342   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:20.909355   74828 node_conditions.go:105] duration metric: took 6.699359ms to run NodePressure ...
	I0816 18:14:20.909377   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:21.193348   74828 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198555   74828 kubeadm.go:739] kubelet initialised
	I0816 18:14:21.198585   74828 kubeadm.go:740] duration metric: took 5.20722ms waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198595   74828 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:21.204695   74828 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.212855   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212877   74828 pod_ready.go:82] duration metric: took 8.157781ms for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.212889   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212899   74828 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.220125   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220150   74828 pod_ready.go:82] duration metric: took 7.241861ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.220158   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220166   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.226930   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226957   74828 pod_ready.go:82] duration metric: took 6.783402ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.226967   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226976   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.292011   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292054   74828 pod_ready.go:82] duration metric: took 65.066708ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.292066   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292075   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692536   74828 pod_ready.go:93] pod "kube-proxy-vdqxz" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:21.692564   74828 pod_ready.go:82] duration metric: took 400.476293ms for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692577   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.155261   75006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437939279s)
	I0816 18:14:21.155296   75006 crio.go:469] duration metric: took 2.438065212s to extract the tarball
	I0816 18:14:21.155325   75006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:21.199451   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:21.249963   75006 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:14:21.249990   75006 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:14:21.250002   75006 kubeadm.go:934] updating node { 192.168.72.144 8444 v1.31.0 crio true true} ...
	I0816 18:14:21.250129   75006 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-256678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:21.250211   75006 ssh_runner.go:195] Run: crio config
	I0816 18:14:21.299619   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:21.299644   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:21.299663   75006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:21.299684   75006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-256678 NodeName:default-k8s-diff-port-256678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:21.299813   75006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-256678"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:21.299880   75006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:21.310127   75006 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:21.310205   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:21.319566   75006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 18:14:21.337043   75006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:21.352319   75006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 18:14:21.370117   75006 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:21.373986   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:21.386518   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:21.508855   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:21.525184   75006 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678 for IP: 192.168.72.144
	I0816 18:14:21.525209   75006 certs.go:194] generating shared ca certs ...
	I0816 18:14:21.525230   75006 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:21.525413   75006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:21.525468   75006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:21.525481   75006 certs.go:256] generating profile certs ...
	I0816 18:14:21.525604   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/client.key
	I0816 18:14:21.525688   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key.ac6d83aa
	I0816 18:14:21.525738   75006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key
	I0816 18:14:21.525888   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:21.525931   75006 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:21.525944   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:21.525991   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:21.526028   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:21.526052   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:21.526101   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:21.526719   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:21.556992   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:21.590311   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:21.624782   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:21.655118   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 18:14:21.695431   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:21.722575   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:21.744870   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:14:21.770850   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:21.793906   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:21.817643   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:21.839584   75006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:21.856447   75006 ssh_runner.go:195] Run: openssl version
	I0816 18:14:21.862104   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:21.872584   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876886   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876945   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.882424   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:21.892761   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:21.904506   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909624   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909687   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.915765   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:21.927160   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:21.937381   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941423   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941477   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.946741   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:21.958082   75006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:21.962431   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:21.969889   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:21.977302   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:21.983468   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:21.989115   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:21.994569   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:21.999962   75006 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:22.000090   75006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:22.000139   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.034063   75006 cri.go:89] found id: ""
	I0816 18:14:22.034158   75006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:22.043988   75006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:22.044003   75006 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:22.044040   75006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:22.053276   75006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:22.054255   75006 kubeconfig.go:125] found "default-k8s-diff-port-256678" server: "https://192.168.72.144:8444"
	I0816 18:14:22.056408   75006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:22.065394   75006 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I0816 18:14:22.065429   75006 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:22.065443   75006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:22.065496   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.112797   75006 cri.go:89] found id: ""
	I0816 18:14:22.112889   75006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:22.130231   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:22.139432   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:22.139451   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:22.139493   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:14:22.148118   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:22.148168   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:22.158088   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:14:22.166741   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:22.166803   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:22.175578   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.185238   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:22.185286   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.194074   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:14:22.205053   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:22.205105   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:22.216506   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:22.228754   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:22.344597   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.006750   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.275587   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.356515   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.432890   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:23.432991   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.933834   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.433736   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.205567   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:23.206051   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:23.206078   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:23.206007   76298 retry.go:31] will retry after 2.304886921s: waiting for machine to come up
	I0816 18:14:25.512748   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:25.513295   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:25.513321   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:25.513261   76298 retry.go:31] will retry after 2.603393394s: waiting for machine to come up
	I0816 18:14:23.801346   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:26.199045   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:28.205981   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:24.933846   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.954190   75006 api_server.go:72] duration metric: took 1.521307594s to wait for apiserver process to appear ...
	I0816 18:14:24.954219   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:24.954242   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.835517   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.835552   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.835567   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.842961   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.842992   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.954290   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.963372   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:27.963400   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.455035   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.460244   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.460279   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.954475   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.962766   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.962802   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.454298   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.458650   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.458681   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.954582   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.959359   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.959384   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.455077   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.461068   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.461099   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.954772   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.960557   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.960588   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:31.455232   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:31.460157   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:14:31.471015   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:31.471046   75006 api_server.go:131] duration metric: took 6.516819341s to wait for apiserver health ...
	I0816 18:14:31.471056   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:31.471064   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:31.472930   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
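The repeated 500 responses above are minikube's api_server.go polling https://192.168.72.144:8444/healthz roughly every half second (see the timestamps) until every post-start hook reports ok; once the rbac/bootstrap-roles, scheduling and apiservice-discovery hooks finish, the endpoint returns 200 and the wait ends after ~6.5s. A minimal sketch of that kind of readiness poll is below. It is illustrative only: certificate verification is simply skipped here, whereas the real code authenticates against the cluster with its client certificates.

// Sketch of a /healthz readiness poll like the one logged above (not minikube's code).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption for the sketch: skip TLS verification instead of using client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body) // drain the "[+]/[-] poststarthook ..." body
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			// 500 with "[-]poststarthook/... failed" lines means: not ready yet, retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.144:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}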
	I0816 18:14:28.118105   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:28.118675   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:28.118706   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:28.118637   76298 retry.go:31] will retry after 2.400714985s: waiting for machine to come up
	I0816 18:14:30.521623   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:30.522157   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:30.522196   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:30.522111   76298 retry.go:31] will retry after 3.210603239s: waiting for machine to come up
	I0816 18:14:30.699930   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:33.200755   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:31.474388   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:31.484723   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
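The two steps above create /etc/cni/net.d and copy a 496-byte bridge conflist into it from memory. The log does not show the file's contents; the sketch below writes an illustrative bridge-plus-host-local configuration of the general shape a bridge CNI setup uses. Every value in it (CNI version, subnet, plugin list) is an assumption, not the actual 1-k8s.conflist.

// Rough sketch only: writes an assumed bridge CNI conflist; the real file's
// contents and pod CIDR are not visible in the log above.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}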
	I0816 18:14:31.502094   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:31.511169   75006 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:31.511207   75006 system_pods.go:61] "coredns-6f6b679f8f-2sgmk" [3c98207c-ab70-435e-a725-3d6b108515d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:31.511215   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [c6d0dbe2-8b80-4fb2-8408-7b2e668cf4cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:31.511221   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [4506e38e-6685-41f8-98b1-738b35476ad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:31.511228   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [14282ea5-2ebc-4ea6-8e06-829e86296333] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:31.511232   75006 system_pods.go:61] "kube-proxy-l4lr2" [880ceec6-c3d1-4934-b02a-7a175ded8a02] Running
	I0816 18:14:31.511236   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [b122d1cd-12e8-4b87-a179-c50baf4c89d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:31.511241   75006 system_pods.go:61] "metrics-server-6867b74b74-fc4h4" [3cb9624e-98b4-4edb-a2de-d6a971520cac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:31.511244   75006 system_pods.go:61] "storage-provisioner" [79442d12-c28b-447e-ae96-e4c2ddb5c4da] Running
	I0816 18:14:31.511250   75006 system_pods.go:74] duration metric: took 9.137933ms to wait for pod list to return data ...
	I0816 18:14:31.511256   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:31.515339   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:31.515361   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:31.515370   75006 node_conditions.go:105] duration metric: took 4.110442ms to run NodePressure ...
	I0816 18:14:31.515387   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:31.774197   75006 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778258   75006 kubeadm.go:739] kubelet initialised
	I0816 18:14:31.778276   75006 kubeadm.go:740] duration metric: took 4.052927ms waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778283   75006 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:31.782595   75006 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:33.788205   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
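The pod_ready lines above poll the system-critical kube-system pods until each one's Ready condition turns True (coredns-6f6b679f8f-2sgmk is still reporting "Ready":"False" at this point). A minimal client-go sketch of that check follows; the kubeconfig path is a placeholder and this is not minikube's pod_ready.go, just the same idea.

// Hypothetical sketch: poll one pod's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; the test harness uses the profile's generated kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-2sgmk", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}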
	I0816 18:14:35.053312   74510 start.go:364] duration metric: took 53.786665535s to acquireMachinesLock for "embed-certs-777541"
	I0816 18:14:35.053367   74510 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:35.053372   74510 fix.go:54] fixHost starting: 
	I0816 18:14:35.053687   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:35.053718   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:35.073509   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0816 18:14:35.073935   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:35.074396   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:14:35.074420   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:35.074749   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:35.074928   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:35.075102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:14:35.076710   74510 fix.go:112] recreateIfNeeded on embed-certs-777541: state=Stopped err=<nil>
	I0816 18:14:35.076738   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	W0816 18:14:35.076903   74510 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:35.078759   74510 out.go:177] * Restarting existing kvm2 VM for "embed-certs-777541" ...
	I0816 18:14:33.735394   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735898   75402 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:14:33.735925   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735933   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:14:33.736407   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.736439   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:14:33.736459   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | skip adding static IP to network mk-old-k8s-version-783465 - found existing host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"}
	I0816 18:14:33.736478   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:14:33.736492   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:14:33.739028   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739377   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.739397   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739596   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:14:33.739689   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:14:33.739724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:33.739747   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:14:33.739785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:14:33.861036   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:33.861405   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:14:33.862105   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:33.864850   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865245   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.865272   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865542   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:14:33.865796   75402 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:33.865820   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:33.866053   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.868422   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868761   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.868795   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868911   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.869095   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869267   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869415   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.869579   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.869796   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.869810   75402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:33.972880   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:33.972907   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973141   75402 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:14:33.973172   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973378   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.976198   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976530   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.976563   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976747   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.976945   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977086   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977228   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.977369   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.977529   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.977540   75402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:14:34.086092   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:14:34.086123   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.088785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089107   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.089132   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089285   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.089527   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089684   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089828   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.089997   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.090152   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.090168   75402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:34.200744   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:34.200779   75402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:34.200834   75402 buildroot.go:174] setting up certificates
	I0816 18:14:34.200848   75402 provision.go:84] configureAuth start
	I0816 18:14:34.200862   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:34.201175   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:34.203868   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204297   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.204344   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204506   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.207067   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207441   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.207464   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207810   75402 provision.go:143] copyHostCerts
	I0816 18:14:34.207869   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:34.207892   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:34.207951   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:34.208058   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:34.208069   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:34.208103   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:34.208180   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:34.208192   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:34.208220   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:34.208291   75402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
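The provision step above issues a server certificate signed by the local minikube CA, with subject alternative names covering the loopback address, the VM IP, and the machine's hostnames. The sketch below builds an equivalent certificate with Go's crypto/x509 under stated assumptions: it generates a throwaway CA in memory (the real flow reuses ca.pem/ca-key.pem from the .minikube/certs directory) and only prints the resulting certificate instead of writing server.pem.

// Sketch of issuing a server cert with the SANs listed in the log line above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch only; the real provisioner loads an existing CA key.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log: 127.0.0.1, 192.168.39.211,
	// localhost, minikube, old-k8s-version-783465.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-783465"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-783465"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}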
	I0816 18:14:34.413800   75402 provision.go:177] copyRemoteCerts
	I0816 18:14:34.413857   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:34.413881   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.416724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417138   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.417173   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417345   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.417673   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.417894   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.418089   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.495519   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:34.517414   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:14:34.540423   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:14:34.563983   75402 provision.go:87] duration metric: took 363.122639ms to configureAuth
	I0816 18:14:34.564019   75402 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:34.564229   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:14:34.564299   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.567149   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567550   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.567580   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567753   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.567935   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568098   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568255   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.568448   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.568659   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.568680   75402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:34.824064   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:34.824091   75402 machine.go:96] duration metric: took 958.278616ms to provisionDockerMachine
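The SSH command a few lines up drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and then restarts crio. Presumably the guest's crio.service consumes that file as an environment file so the extra flag reaches the runtime on restart; that wiring lives in the guest image and is an assumption here, not something shown in this log.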
	I0816 18:14:34.824106   75402 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:14:34.824120   75402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:34.824169   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:34.824556   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:34.824599   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.827203   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827517   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.827547   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827677   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.827869   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.828033   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.828171   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.912148   75402 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:34.916652   75402 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:34.916681   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:34.916755   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:34.916864   75402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:34.916989   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:34.927061   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:34.949703   75402 start.go:296] duration metric: took 125.581331ms for postStartSetup
	I0816 18:14:34.949743   75402 fix.go:56] duration metric: took 19.13519024s for fixHost
	I0816 18:14:34.949763   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.952740   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953090   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.953124   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.953532   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953715   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953861   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.954029   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.954229   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.954242   75402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:35.053143   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832075.025252523
	
	I0816 18:14:35.053171   75402 fix.go:216] guest clock: 1723832075.025252523
	I0816 18:14:35.053180   75402 fix.go:229] Guest: 2024-08-16 18:14:35.025252523 +0000 UTC Remote: 2024-08-16 18:14:34.949747236 +0000 UTC m=+221.880938919 (delta=75.505287ms)
	I0816 18:14:35.053204   75402 fix.go:200] guest clock delta is within tolerance: 75.505287ms
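The delta reported above is just the difference between the two timestamps printed on the previous lines: 1723832075.025252523 − 1723832074.949747236 ≈ 0.075505 s, i.e. the 75.505287ms shown, so the guest clock is considered close enough and no adjustment is attempted.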
	I0816 18:14:35.053211   75402 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 19.238692888s
	I0816 18:14:35.053243   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.053549   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:35.056365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.056792   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.056823   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.057009   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057509   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057731   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057831   75402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:35.057892   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.057951   75402 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:35.057972   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.060543   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060733   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060987   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061016   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061126   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061148   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061154   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061319   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061339   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061456   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061518   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061639   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061720   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.061773   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.174137   75402 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:35.181704   75402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:35.323490   75402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:35.330733   75402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:35.330807   75402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:35.350653   75402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:35.350679   75402 start.go:495] detecting cgroup driver to use...
	I0816 18:14:35.350763   75402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:35.372307   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:35.386513   75402 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:35.386598   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:35.400406   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:35.414761   75402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:35.540356   75402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:35.675726   75402 docker.go:233] disabling docker service ...
	I0816 18:14:35.675793   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:35.691169   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:35.707288   75402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:35.858149   75402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:35.981654   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:35.996396   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:36.013656   75402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:14:36.013711   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.023839   75402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:36.023907   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.033889   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.043727   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
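For reference, the net effect of the three sed edits above on the CRI-O drop-in can be checked directly on the guest; the expected values below are a sketch inferred from the commands shown, not captured output from this run.

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"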
	I0816 18:14:36.053496   75402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:36.063694   75402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:36.072919   75402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:36.072979   75402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:36.085707   75402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:36.095377   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:36.219235   75402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:36.384915   75402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:36.384990   75402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:36.392122   75402 start.go:563] Will wait 60s for crictl version
	I0816 18:14:36.392196   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:36.397589   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:36.443581   75402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:36.443710   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.473740   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.512542   75402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:14:36.513678   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:36.517404   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.517912   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:36.517948   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.518190   75402 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:36.523577   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:36.536188   75402 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:36.536361   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:14:36.536425   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:36.587027   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:36.587085   75402 ssh_runner.go:195] Run: which lz4
	I0816 18:14:36.590780   75402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:36.594635   75402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:36.594673   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
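The stat probe above is only an existence check: exit status 1 means /preloaded.tar.lz4 is not on the guest yet, so the ~473 MB preload tarball is copied over and, further down in this log, unpacked into /var. On the node that unpack step is equivalent to:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4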
	I0816 18:14:35.080033   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Start
	I0816 18:14:35.080220   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring networks are active...
	I0816 18:14:35.080971   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network default is active
	I0816 18:14:35.081366   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network mk-embed-certs-777541 is active
	I0816 18:14:35.081887   74510 main.go:141] libmachine: (embed-certs-777541) Getting domain xml...
	I0816 18:14:35.082634   74510 main.go:141] libmachine: (embed-certs-777541) Creating domain...
	I0816 18:14:36.459300   74510 main.go:141] libmachine: (embed-certs-777541) Waiting to get IP...
	I0816 18:14:36.460282   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.460801   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.460883   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.460778   76422 retry.go:31] will retry after 291.491491ms: waiting for machine to come up
	I0816 18:14:36.754548   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.755372   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.755412   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.755313   76422 retry.go:31] will retry after 356.347467ms: waiting for machine to come up
	I0816 18:14:37.113124   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.113704   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.113739   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.113676   76422 retry.go:31] will retry after 386.244375ms: waiting for machine to come up
	I0816 18:14:37.502241   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.502796   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.502826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.502750   76422 retry.go:31] will retry after 437.69847ms: waiting for machine to come up
	I0816 18:14:37.942667   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.943423   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.943456   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.943378   76422 retry.go:31] will retry after 709.064032ms: waiting for machine to come up
	I0816 18:14:38.653840   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:38.654349   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:38.654386   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:38.654297   76422 retry.go:31] will retry after 594.417028ms: waiting for machine to come up
	I0816 18:14:34.700134   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:34.700158   74828 pod_ready.go:82] duration metric: took 13.007571631s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:34.700171   74828 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:36.707977   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:38.708527   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.790842   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:37.791236   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:37.791278   75006 pod_ready.go:82] duration metric: took 6.008656328s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:37.791294   75006 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798513   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:39.798543   75006 pod_ready.go:82] duration metric: took 2.007240233s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798557   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:38.127403   75402 crio.go:462] duration metric: took 1.536659915s to copy over tarball
	I0816 18:14:38.127504   75402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:41.109575   75402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982013621s)
	I0816 18:14:41.109639   75402 crio.go:469] duration metric: took 2.982198625s to extract the tarball
	I0816 18:14:41.109650   75402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:41.152940   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:41.185863   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:41.185892   75402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:14:41.185982   75402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.186003   75402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.186036   75402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.186044   75402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.186103   75402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:14:41.186171   75402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187521   75402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:14:41.187532   75402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187542   75402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.187527   75402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.187595   75402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.187605   75402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.187688   75402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.187840   75402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.421551   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:14:41.462506   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.467716   75402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:14:41.467758   75402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:14:41.467810   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508571   75402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:14:41.508638   75402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.508687   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508691   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.514560   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.520003   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.526475   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.526892   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.533271   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.569269   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.569426   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.694043   75402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:14:41.694100   75402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.694049   75402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:14:41.694210   75402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.694173   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.694268   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.701292   75402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:14:41.701337   75402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.701389   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.707345   75402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:14:41.707415   75402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.707467   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.711820   75402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:14:41.711854   75402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.711896   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.723813   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.723850   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.723814   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.723939   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.723951   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.724003   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.724060   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.872645   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.872674   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:14:41.873747   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.873786   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.873891   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.873899   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.873960   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.997519   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:14:42.002048   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:42.002091   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:42.002140   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:42.002178   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:42.002218   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:42.070993   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:42.115418   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:14:42.115527   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:14:42.115623   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:14:42.115631   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:14:42.115689   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:14:42.235706   75402 cache_images.go:92] duration metric: took 1.049784726s to LoadCachedImages
	W0816 18:14:42.235807   75402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0816 18:14:42.235821   75402 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:14:42.235939   75402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
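In the kubelet unit rendered above, the empty ExecStart= line is the usual systemd drop-in idiom: it clears the ExecStart inherited from kubelet.service so the following ExecStart fully replaces it rather than being rejected as a second start command. The drop-in is written a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and can be inspected on the node with:

	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf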
	I0816 18:14:42.236024   75402 ssh_runner.go:195] Run: crio config
	I0816 18:14:42.286742   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:14:42.286763   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:42.286771   75402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:42.286789   75402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:14:42.286904   75402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:42.286961   75402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:14:42.297015   75402 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:42.297098   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:42.306400   75402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:14:42.322812   75402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:42.339791   75402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 18:14:42.356930   75402 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:42.360578   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:42.373248   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:42.495499   75402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:42.511910   75402 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:14:42.511942   75402 certs.go:194] generating shared ca certs ...
	I0816 18:14:42.511964   75402 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:42.512147   75402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:42.512206   75402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:42.512220   75402 certs.go:256] generating profile certs ...
	I0816 18:14:42.512361   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:14:42.512431   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:14:42.512483   75402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:14:42.512664   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:42.512709   75402 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:42.512724   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:42.512754   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:42.512794   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:42.512825   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:42.512881   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:42.513660   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:42.552291   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:42.585617   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:42.611017   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:42.638092   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:14:42.676877   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:14:42.710091   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:42.743734   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:42.779905   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:42.802779   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:42.826432   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:42.849286   75402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:42.866901   75402 ssh_runner.go:195] Run: openssl version
	I0816 18:14:42.872283   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:42.882976   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887432   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887504   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.893275   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:42.903687   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:42.915232   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919669   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919735   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.925282   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:42.937888   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:42.949994   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954495   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954548   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.960295   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
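The three test/ln blocks above follow the standard OpenSSL trust-store layout: each CA certificate sits under /usr/share/ca-certificates and is symlinked into /etc/ssl/certs under its subject-hash name with a .0 suffix. A minimal sketch of that pattern, using a hypothetical certificate name:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"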
	I0816 18:14:42.972006   75402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:42.976450   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:42.982741   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:42.988649   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:42.995021   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:43.000965   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:43.007030   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
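Each -checkend 86400 invocation above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status flags the cert as expiring soon so it can be regenerated. For example, against one of the certs copied earlier in this run:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"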
	I0816 18:14:43.012891   75402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:43.012983   75402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:43.013071   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.050670   75402 cri.go:89] found id: ""
	I0816 18:14:43.050741   75402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:43.060748   75402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:43.060773   75402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:43.060825   75402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:43.070299   75402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:43.071251   75402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:14:43.071945   75402 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-783465" cluster setting kubeconfig missing "old-k8s-version-783465" context setting]
	I0816 18:14:43.072870   75402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:39.250064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:39.250979   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:39.251028   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:39.250914   76422 retry.go:31] will retry after 1.014851653s: waiting for machine to come up
	I0816 18:14:40.266811   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:40.267287   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:40.267323   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:40.267238   76422 retry.go:31] will retry after 1.333311972s: waiting for machine to come up
	I0816 18:14:41.602031   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:41.602532   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:41.602565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:41.602480   76422 retry.go:31] will retry after 1.525496469s: waiting for machine to come up
	I0816 18:14:43.130136   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:43.130620   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:43.130661   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:43.130563   76422 retry.go:31] will retry after 2.206344656s: waiting for machine to come up
	I0816 18:14:41.206879   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.706278   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:41.806382   75006 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.927145   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.927173   75006 pod_ready.go:82] duration metric: took 4.128607781s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.927182   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932293   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.932314   75006 pod_ready.go:82] duration metric: took 5.122737ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932326   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937128   75006 pod_ready.go:93] pod "kube-proxy-l4lr2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.937146   75006 pod_ready.go:82] duration metric: took 4.812798ms for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937154   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.941992   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.942018   75006 pod_ready.go:82] duration metric: took 4.856588ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.942030   75006 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.141753   75402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:43.154269   75402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0816 18:14:43.154324   75402 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:43.154341   75402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:43.154404   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.192966   75402 cri.go:89] found id: ""
	I0816 18:14:43.193035   75402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:43.213101   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:43.222811   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:43.222826   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:43.222870   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:43.232196   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:43.232261   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:43.241633   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:43.250751   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:43.250800   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:43.260197   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.268943   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:43.269000   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.277887   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:43.286281   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:43.286391   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:43.295899   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:43.306026   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:43.441487   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.213457   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.431649   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.553955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.646817   75402 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:44.646923   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.147202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.648050   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.147958   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.647398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.646992   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.338228   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:45.338729   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:45.338763   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:45.338660   76422 retry.go:31] will retry after 2.526891535s: waiting for machine to come up
	I0816 18:14:47.868326   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:47.868821   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:47.868853   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:47.868774   76422 retry.go:31] will retry after 2.866643935s: waiting for machine to come up
	I0816 18:14:45.706669   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:47.707062   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:45.948791   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.447930   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.147987   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.646974   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.147114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.647020   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.147765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.647135   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.147506   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.647568   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.647865   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.736760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:50.737295   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:50.737331   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:50.737245   76422 retry.go:31] will retry after 3.824271015s: waiting for machine to come up
	I0816 18:14:50.206249   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.206435   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:50.449586   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.948577   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:54.566285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.566784   74510 main.go:141] libmachine: (embed-certs-777541) Found IP for machine: 192.168.61.218
	I0816 18:14:54.566809   74510 main.go:141] libmachine: (embed-certs-777541) Reserving static IP address...
	I0816 18:14:54.566825   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has current primary IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.567171   74510 main.go:141] libmachine: (embed-certs-777541) Reserved static IP address: 192.168.61.218
	I0816 18:14:54.567193   74510 main.go:141] libmachine: (embed-certs-777541) Waiting for SSH to be available...
	I0816 18:14:54.567211   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.567231   74510 main.go:141] libmachine: (embed-certs-777541) DBG | skip adding static IP to network mk-embed-certs-777541 - found existing host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"}
	I0816 18:14:54.567245   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Getting to WaitForSSH function...
	I0816 18:14:54.569546   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.569864   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.569890   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.570019   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH client type: external
	I0816 18:14:54.570046   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa (-rw-------)
	I0816 18:14:54.570073   74510 main.go:141] libmachine: (embed-certs-777541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:54.570082   74510 main.go:141] libmachine: (embed-certs-777541) DBG | About to run SSH command:
	I0816 18:14:54.570109   74510 main.go:141] libmachine: (embed-certs-777541) DBG | exit 0
	I0816 18:14:54.692450   74510 main.go:141] libmachine: (embed-certs-777541) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:54.692828   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetConfigRaw
	I0816 18:14:54.693486   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:54.696565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.696943   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.696987   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.697248   74510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/config.json ...
	I0816 18:14:54.697455   74510 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:54.697475   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:54.697686   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.700172   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700491   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.700520   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700716   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.700906   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701239   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.701440   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.701650   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.701662   74510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:54.800770   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:54.800805   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801047   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:14:54.801079   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801264   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.804313   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804734   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.804761   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804940   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.805132   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805485   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.805711   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.805869   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.805886   74510 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-777541 && echo "embed-certs-777541" | sudo tee /etc/hostname
	I0816 18:14:54.918908   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-777541
	
	I0816 18:14:54.918949   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.921760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922117   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.922146   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922338   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.922511   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922681   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922843   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.923033   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.923243   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.923261   74510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-777541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-777541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-777541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:55.028983   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:55.029016   74510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:55.029040   74510 buildroot.go:174] setting up certificates
	I0816 18:14:55.029051   74510 provision.go:84] configureAuth start
	I0816 18:14:55.029064   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:55.029320   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.032273   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032693   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.032743   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032983   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.035257   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035581   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.035606   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035742   74510 provision.go:143] copyHostCerts
	I0816 18:14:55.035797   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:55.035814   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:55.035899   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:55.035996   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:55.036004   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:55.036024   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:55.036081   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:55.036087   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:55.036106   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:55.036155   74510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.embed-certs-777541 san=[127.0.0.1 192.168.61.218 embed-certs-777541 localhost minikube]
	I0816 18:14:55.182540   74510 provision.go:177] copyRemoteCerts
	I0816 18:14:55.182606   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:55.182633   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.185807   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186179   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.186229   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186429   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.186619   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.186770   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.186884   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.262494   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:14:55.285186   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:55.307082   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:55.328912   74510 provision.go:87] duration metric: took 299.848734ms to configureAuth
	I0816 18:14:55.328934   74510 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:55.329140   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:55.329215   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.331989   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332366   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.332414   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332594   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.332801   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333006   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333158   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.333312   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.333501   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.333522   74510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:55.579734   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:55.579765   74510 machine.go:96] duration metric: took 882.296402ms to provisionDockerMachine
	I0816 18:14:55.579781   74510 start.go:293] postStartSetup for "embed-certs-777541" (driver="kvm2")
	I0816 18:14:55.579793   74510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:55.579814   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.580182   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:55.580216   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.582826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583250   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.583285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583374   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.583574   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.583739   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.583972   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.663379   74510 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:55.667205   74510 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:55.667231   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:55.667321   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:55.667426   74510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:55.667560   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:55.676427   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:55.698188   74510 start.go:296] duration metric: took 118.396211ms for postStartSetup
	I0816 18:14:55.698226   74510 fix.go:56] duration metric: took 20.644852989s for fixHost
	I0816 18:14:55.698245   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.701014   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701359   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.701390   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701587   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.701755   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.701924   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.702070   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.702241   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.702452   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.702464   74510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:55.801397   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832095.756052952
	
	I0816 18:14:55.801431   74510 fix.go:216] guest clock: 1723832095.756052952
	I0816 18:14:55.801443   74510 fix.go:229] Guest: 2024-08-16 18:14:55.756052952 +0000 UTC Remote: 2024-08-16 18:14:55.698231489 +0000 UTC m=+357.018707788 (delta=57.821463ms)
	I0816 18:14:55.801492   74510 fix.go:200] guest clock delta is within tolerance: 57.821463ms
	I0816 18:14:55.801504   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 20.74815396s
	I0816 18:14:55.801528   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.801781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.804216   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804617   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.804659   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804795   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805395   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805622   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805730   74510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:55.805781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.805849   74510 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:55.805877   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.808587   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.808946   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.808978   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809080   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809249   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809415   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.809417   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809442   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809575   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809597   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809720   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809766   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.809857   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809970   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.885026   74510 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:55.927940   74510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:56.072936   74510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:56.080952   74510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:56.081029   74510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:56.100709   74510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:56.100734   74510 start.go:495] detecting cgroup driver to use...
	I0816 18:14:56.100791   74510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:56.115759   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:56.129714   74510 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:56.129774   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:56.142909   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:56.156413   74510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:56.268818   74510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:56.424536   74510 docker.go:233] disabling docker service ...
	I0816 18:14:56.424612   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:56.438033   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:56.450479   74510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:56.560132   74510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:56.683671   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:56.697636   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:56.716486   74510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:56.716560   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.726082   74510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:56.726144   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.735971   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.745410   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.754952   74510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:56.764717   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.774153   74510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.789843   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.799399   74510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:56.807679   74510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:56.807743   74510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:56.819873   74510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:56.829921   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:56.936372   74510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:57.073931   74510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:57.073998   74510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:57.078254   74510 start.go:563] Will wait 60s for crictl version
	I0816 18:14:57.078327   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:14:57.081833   74510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:57.121402   74510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:57.121476   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.149262   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.183015   74510 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
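The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning pause_image to registry.k8s.io/pause:3.10 and forcing cgroup_manager to cgroupfs before crio is restarted. A minimal Go sketch of the same in-place key rewrite, run locally rather than through ssh_runner; the helper name and the local-file approach are assumptions for illustration only.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites a single `key = value` line in a CRI-O drop-in,
// the same effect the `sudo sed -i` commands in the log achieve remotely.
// Path handling and error behaviour here are an illustrative sketch only.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Println(err)
	}
	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}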
	I0816 18:14:53.146986   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.647279   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.147587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.647911   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.147322   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.647765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.147695   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.647296   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.147031   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.647108   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.184157   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:57.186758   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187177   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:57.187206   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187439   74510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:57.191152   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:57.203073   74510 kubeadm.go:883] updating cluster {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:57.203240   74510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:57.203332   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:57.238289   74510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:57.238348   74510 ssh_runner.go:195] Run: which lz4
	I0816 18:14:57.242251   74510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:57.246081   74510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:57.246124   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:58.459887   74510 crio.go:462] duration metric: took 1.217672418s to copy over tarball
	I0816 18:14:58.459960   74510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:54.707069   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.206750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:55.449391   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.449830   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:59.451338   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:58.147661   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.647270   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.147355   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.647821   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.148023   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.647165   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.147669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.647960   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.147721   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.647932   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.545989   74510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085985152s)
	I0816 18:15:00.546028   74510 crio.go:469] duration metric: took 2.086110527s to extract the tarball
	I0816 18:15:00.546039   74510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:15:00.587096   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:15:00.630366   74510 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:15:00.630394   74510 cache_images.go:84] Images are preloaded, skipping loading
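The two "sudo crictl images --output json" runs bracket the preload step above: the first finds kube-apiserver:v1.31.0 missing, so the preloaded tarball is copied over and extracted under /var, and the second confirms all images are now present. A rough Go sketch of that presence check follows; the JSON field names match what recent crictl releases emit and are an assumption here, not something taken from the minikube source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages matches the shape of `crictl images --output json` output
// (assumed field names "images" / "repoTags"; treat as illustrative).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already has the given tag, the same
// check the log performs before deciding to copy and extract the preload tarball.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(ok, err)
}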
	I0816 18:15:00.630405   74510 kubeadm.go:934] updating node { 192.168.61.218 8443 v1.31.0 crio true true} ...
	I0816 18:15:00.630540   74510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-777541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
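The kubelet drop-in rendered above carries the node-specific flags (hostname override, node IP, kubeconfig paths). A small Go sketch of assembling that ExecStart line from a node description; the struct and field names are placeholders for illustration, not minikube's own types.

package main

import (
	"fmt"
	"strings"
)

// nodeConfig holds just the fields needed to render the ExecStart line shown
// in the log; it is an illustrative stand-in, not minikube's config type.
type nodeConfig struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

// kubeletExecStart builds the flag set written into the systemd drop-in.
func kubeletExecStart(c nodeConfig) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=" + c.Hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + c.NodeIP,
	}
	return "ExecStart=" + c.KubeletPath + " " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletExecStart(nodeConfig{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.0/kubelet",
		Hostname:    "embed-certs-777541",
		NodeIP:      "192.168.61.218",
	}))
}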
	I0816 18:15:00.630630   74510 ssh_runner.go:195] Run: crio config
	I0816 18:15:00.681196   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:00.681224   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:00.681235   74510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:15:00.681262   74510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-777541 NodeName:embed-certs-777541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:15:00.681439   74510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-777541"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:15:00.681534   74510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:15:00.691239   74510 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:15:00.691294   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:15:00.700059   74510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 18:15:00.717826   74510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:15:00.733475   74510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 18:15:00.750175   74510 ssh_runner.go:195] Run: grep 192.168.61.218	control-plane.minikube.internal$ /etc/hosts
	I0816 18:15:00.753865   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:15:00.765531   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:00.875234   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:00.893095   74510 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541 for IP: 192.168.61.218
	I0816 18:15:00.893115   74510 certs.go:194] generating shared ca certs ...
	I0816 18:15:00.893131   74510 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:00.893274   74510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:15:00.893318   74510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:15:00.893327   74510 certs.go:256] generating profile certs ...
	I0816 18:15:00.893403   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/client.key
	I0816 18:15:00.893459   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key.dd0c1a01
	I0816 18:15:00.893503   74510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key
	I0816 18:15:00.893617   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:15:00.893645   74510 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:15:00.893655   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:15:00.893675   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:15:00.893698   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:15:00.893721   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:15:00.893759   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:15:00.894445   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:15:00.936535   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:15:00.969775   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:15:01.013053   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:15:01.046087   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 18:15:01.073290   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:15:01.097033   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:15:01.119859   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:15:01.141943   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:15:01.168752   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:15:01.191193   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:15:01.213691   74510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:15:01.229374   74510 ssh_runner.go:195] Run: openssl version
	I0816 18:15:01.234563   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:15:01.244301   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248156   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248220   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.253468   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:15:01.262917   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:15:01.272577   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276790   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276841   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.281847   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:15:01.291789   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:15:01.302422   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306320   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306364   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.311335   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
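
The openssl/ln sequence above is how the extra CA certificates are made visible to OpenSSL-based clients on the node: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash" and symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0 for minikubeCA.pem, for example). A minimal Go sketch of the same idea, shelling out to openssl just as the logged commands do; the helper name and the choice to run locally instead of over minikube's SSH runner are illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash mirrors the logged commands: compute the OpenSSL subject hash of a
    // CA certificate and create /etc/ssl/certs/<hash>.0 pointing at it.
    func linkByHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Equivalent of "ln -fs": drop any existing link before recreating it.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
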
	I0816 18:15:01.320713   74510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:15:01.324442   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:15:01.330137   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:15:01.335693   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:15:01.340987   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:15:01.346071   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:15:01.351280   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
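
The "openssl x509 ... -checkend 86400" probes above ask whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. A rough equivalent in Go, checking a local PEM file with crypto/x509 (the path and the 24h window come from the log; minikube itself runs openssl on the guest over SSH, so this is only a sketch of the check, not its actual code path):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file at path
    // expires within the given window (the equivalent of `openssl x509 -checkend`).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
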
	I0816 18:15:01.357275   74510 kubeadm.go:392] StartCluster: {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:15:01.357388   74510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:15:01.357427   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.400422   74510 cri.go:89] found id: ""
	I0816 18:15:01.400497   74510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:15:01.410142   74510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:15:01.410162   74510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:15:01.410211   74510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:15:01.419129   74510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:15:01.420130   74510 kubeconfig.go:125] found "embed-certs-777541" server: "https://192.168.61.218:8443"
	I0816 18:15:01.422036   74510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:15:01.430665   74510 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.218
	I0816 18:15:01.430694   74510 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:15:01.430705   74510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:15:01.430762   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.469108   74510 cri.go:89] found id: ""
	I0816 18:15:01.469182   74510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:15:01.486125   74510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:15:01.495311   74510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:15:01.495335   74510 kubeadm.go:157] found existing configuration files:
	
	I0816 18:15:01.495384   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:15:01.504066   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:15:01.504128   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:15:01.513222   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:15:01.521593   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:15:01.521692   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:15:01.530413   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.539027   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:15:01.539101   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.547802   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:15:01.557143   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:15:01.557203   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:15:01.568616   74510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:15:01.578091   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:01.700661   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.631047   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.833132   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.900476   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
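
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml rather than doing a full kubeadm init. A small Go sketch of driving that phase sequence with os/exec, assuming a local shell instead of minikube's SSH runner (paths taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const (
    		kubeadm = "/var/lib/minikube/binaries/v1.31.0/kubeadm"
    		config  = "/var/tmp/minikube/kubeadm.yaml"
    	)
    	// Phases re-run on a restart, in the same order as the log above.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{kubeadm, "init", "phase"}, phase...)
    		args = append(args, "--config", config)
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
    			return
    		}
    	}
    	fmt.Println("all restart phases completed")
    }
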
	I0816 18:15:02.972431   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:15:02.972514   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.473296   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.707731   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:02.206825   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:01.948070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.948398   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.147098   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.646983   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.147320   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.647649   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.647999   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.147901   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.647340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.147339   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.648033   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.973603   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.472779   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.972846   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.473594   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.487878   74510 api_server.go:72] duration metric: took 2.51545841s to wait for apiserver process to appear ...
	I0816 18:15:05.487914   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:15:05.487937   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.450583   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.450618   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.450635   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.495625   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.495656   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.495669   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.516711   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.516744   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:04.836663   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:07.206999   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:06.447839   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.449939   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.988897   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.996347   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.996374   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.488013   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.499514   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:09.499559   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.988080   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.992106   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:15:09.998515   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:15:09.998542   74510 api_server.go:131] duration metric: took 4.510619176s to wait for apiserver health ...
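
The polling above repeatedly hits https://192.168.61.218:8443/healthz until it answers 200 "ok"; the earlier 403 (anonymous user) and 500 (post-start hooks still settling) responses are normal while the restarted apiserver comes up. A simplified Go sketch of such a poll loop; skipping TLS verification here is purely for brevity, whereas the real client trusts the cluster CA and presents client credentials:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes. 403/500 responses are treated as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative only: a real client would verify against the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.218:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
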
	I0816 18:15:09.998555   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:09.998563   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:10.000470   74510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:15:10.001870   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:15:10.011805   74510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:15:10.032349   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:15:10.046765   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:15:10.046798   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:15:10.046808   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:15:10.046817   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:15:10.046829   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:15:10.046838   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 18:15:10.046847   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:15:10.046855   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:15:10.046867   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 18:15:10.046876   74510 system_pods.go:74] duration metric: took 14.506593ms to wait for pod list to return data ...
	I0816 18:15:10.046889   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:15:10.050663   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:15:10.050686   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:15:10.050699   74510 node_conditions.go:105] duration metric: took 3.805313ms to run NodePressure ...
	I0816 18:15:10.050717   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:10.344177   74510 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348795   74510 kubeadm.go:739] kubelet initialised
	I0816 18:15:10.348820   74510 kubeadm.go:740] duration metric: took 4.612695ms waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348830   74510 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:10.355270   74510 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.361564   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361584   74510 pod_ready.go:82] duration metric: took 6.2936ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.361592   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361598   74510 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.367126   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367149   74510 pod_ready.go:82] duration metric: took 5.542782ms for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.367159   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367166   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.372241   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372262   74510 pod_ready.go:82] duration metric: took 5.086551ms for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.372273   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372301   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.436397   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436423   74510 pod_ready.go:82] duration metric: took 64.108858ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.436432   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436443   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.836116   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836146   74510 pod_ready.go:82] duration metric: took 399.693364ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.836158   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836165   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.235403   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235426   74510 pod_ready.go:82] duration metric: took 399.255693ms for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.235439   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235445   74510 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.635717   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635746   74510 pod_ready.go:82] duration metric: took 400.29283ms for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.635756   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635762   74510 pod_ready.go:39] duration metric: took 1.286923943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
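
Each wait above ends with "skipping!" because the node itself has not reported Ready yet, so the per-pod Ready checks are short-circuited rather than failed. A hedged client-go sketch of the underlying per-pod check, assuming a kubeconfig at the default location; the pod name comes from the log and the helper is illustrative, not minikube's own code:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod has its Ready condition set to True.
    func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
    	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)
    	ready, err := isPodReady(client, "kube-system", "coredns-6f6b679f8f-8njs2")
    	fmt.Println(ready, err)
    }
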
	I0816 18:15:11.635784   74510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:15:11.646221   74510 ops.go:34] apiserver oom_adj: -16
	I0816 18:15:11.646248   74510 kubeadm.go:597] duration metric: took 10.23607804s to restartPrimaryControlPlane
	I0816 18:15:11.646269   74510 kubeadm.go:394] duration metric: took 10.288999278s to StartCluster
	I0816 18:15:11.646322   74510 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.646405   74510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:15:11.648652   74510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.648939   74510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:15:11.649056   74510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:15:11.649124   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:15:11.649155   74510 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-777541"
	I0816 18:15:11.649165   74510 addons.go:69] Setting metrics-server=true in profile "embed-certs-777541"
	I0816 18:15:11.649192   74510 addons.go:234] Setting addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:11.649201   74510 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-777541"
	W0816 18:15:11.649205   74510 addons.go:243] addon metrics-server should already be in state true
	I0816 18:15:11.649193   74510 addons.go:69] Setting default-storageclass=true in profile "embed-certs-777541"
	I0816 18:15:11.649252   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649254   74510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-777541"
	W0816 18:15:11.649209   74510 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:15:11.649332   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649702   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649706   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649742   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649772   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649877   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649930   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.651580   74510 out.go:177] * Verifying Kubernetes components...
	I0816 18:15:11.652903   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:11.665975   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0816 18:15:11.666041   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0816 18:15:11.666404   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666439   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666986   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667005   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667051   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667085   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667312   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667517   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667846   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.667899   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.668039   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.668077   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.669328   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0816 18:15:11.669765   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.670270   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.670301   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.670658   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.670896   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.674148   74510 addons.go:234] Setting addon default-storageclass=true in "embed-certs-777541"
	W0816 18:15:11.674165   74510 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:15:11.674184   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.674448   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.674482   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.683629   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0816 18:15:11.683637   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0816 18:15:11.684040   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684048   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684499   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684516   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684653   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684670   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684968   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685114   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685136   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.685329   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.687030   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.687130   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.688852   74510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:15:11.688855   74510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:15:08.147308   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.647669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.147149   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.647072   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.147381   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.647567   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.147101   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.647587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.146972   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.647842   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.689590   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0816 18:15:11.690041   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.690152   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:15:11.690170   74510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:15:11.690186   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690223   74510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:11.690238   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:15:11.690253   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690606   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.690627   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.691006   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.691543   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.691575   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.693646   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693669   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693988   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694007   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694051   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694275   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694436   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694468   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694545   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694602   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694677   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.694885   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.709409   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0816 18:15:11.709800   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.710343   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.710363   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.710700   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.710874   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.712484   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.712691   74510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:11.712706   74510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:15:11.712723   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.715590   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716017   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.716050   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716167   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.716379   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.716572   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.716737   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.864710   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:11.885871   74510 node_ready.go:35] waiting up to 6m0s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:11.985725   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:12.007635   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:15:12.007669   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:15:12.040044   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:12.059661   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:15:12.059687   74510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:15:12.123787   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.123812   74510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:15:12.167249   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.457960   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.457985   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458264   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:12.458315   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458334   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.458348   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.458360   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458577   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458590   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468651   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.468675   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.468921   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.468940   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468963   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.203995   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163904081s)
	I0816 18:15:13.204048   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204060   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204309   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.204350   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204359   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.204368   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204376   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204562   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204589   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213068   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.045790147s)
	I0816 18:15:13.213101   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213115   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213533   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213551   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213555   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.213560   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213595   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213869   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213887   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213897   74510 addons.go:475] Verifying addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:13.213901   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.215724   74510 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 18:15:13.217031   74510 addons.go:510] duration metric: took 1.567977779s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 18:15:09.706813   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:11.708577   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:10.947986   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:12.949227   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:13.147558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.647755   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.147408   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.647810   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.147888   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.647476   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.647785   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.647852   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.889379   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:15.889764   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:18.390031   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:14.207743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:16.705831   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:15.448826   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:17.950756   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:18.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.647013   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.147027   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.647100   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.147070   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.147251   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.647856   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.147427   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.647231   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.890110   74510 node_ready.go:49] node "embed-certs-777541" has status "Ready":"True"
	I0816 18:15:18.890138   74510 node_ready.go:38] duration metric: took 7.004237799s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:18.890156   74510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:18.897124   74510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902860   74510 pod_ready.go:93] pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:18.902878   74510 pod_ready.go:82] duration metric: took 5.73242ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902886   74510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:20.909185   74510 pod_ready.go:103] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.909629   74510 pod_ready.go:93] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:21.909660   74510 pod_ready.go:82] duration metric: took 3.006768325s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:21.909670   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916066   74510 pod_ready.go:93] pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.916090   74510 pod_ready.go:82] duration metric: took 1.006414177s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916099   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920882   74510 pod_ready.go:93] pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.920908   74510 pod_ready.go:82] duration metric: took 4.802561ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920918   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926952   74510 pod_ready.go:93] pod "kube-proxy-j5rl7" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.926975   74510 pod_ready.go:82] duration metric: took 6.0498ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926984   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:19.206127   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.206280   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.705588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:20.448793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:22.948798   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.647030   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.147677   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.647324   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.147973   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.147160   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.646963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.147620   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.647918   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.933953   74510 pod_ready.go:103] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.433826   74510 pod_ready.go:93] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:25.433846   74510 pod_ready.go:82] duration metric: took 2.506855714s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:25.433855   74510 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:27.440119   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.707915   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.206580   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.447687   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:27.948700   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.146994   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.647364   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.147332   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.647773   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.147276   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.647794   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.147398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.647565   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.147139   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.647961   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.440564   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:31.940747   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:30.706544   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.706852   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:29.948982   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.447920   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:34.448186   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:33.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.647087   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.147881   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.646988   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.147118   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.647978   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.147541   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.647423   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.147051   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.647726   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.439692   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.439956   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.440315   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:35.206291   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:37.206902   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.948416   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.447952   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.147192   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.647318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.147186   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.647662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.147044   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.647787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.147638   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.647490   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.147787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.647959   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.440405   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:42.440727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.207086   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.706048   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.706585   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.450069   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.948101   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.147938   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.647855   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.147781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.647710   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:44.647796   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:44.682176   75402 cri.go:89] found id: ""
	I0816 18:15:44.682207   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.682218   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:44.682226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:44.682285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:44.717500   75402 cri.go:89] found id: ""
	I0816 18:15:44.717530   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.717540   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:44.717552   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:44.717620   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:44.751816   75402 cri.go:89] found id: ""
	I0816 18:15:44.751847   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.751858   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:44.751865   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:44.751942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:44.783236   75402 cri.go:89] found id: ""
	I0816 18:15:44.783260   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.783267   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:44.783272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:44.783337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:44.813087   75402 cri.go:89] found id: ""
	I0816 18:15:44.813110   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.813116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:44.813122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:44.813166   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:44.843568   75402 cri.go:89] found id: ""
	I0816 18:15:44.843599   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.843609   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:44.843616   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:44.843679   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:44.873694   75402 cri.go:89] found id: ""
	I0816 18:15:44.873723   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.873734   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:44.873741   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:44.873808   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:44.906183   75402 cri.go:89] found id: ""
	I0816 18:15:44.906212   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.906222   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:44.906231   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:44.906241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:44.958963   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:44.958993   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:44.972390   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:44.972415   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:45.091624   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:45.091645   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:45.091661   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:45.159927   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:45.159963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:47.698398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:47.711848   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:47.711917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:47.744247   75402 cri.go:89] found id: ""
	I0816 18:15:47.744278   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.744288   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:47.744295   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:47.744374   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:47.783188   75402 cri.go:89] found id: ""
	I0816 18:15:47.783211   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.783219   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:47.783224   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:47.783270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:47.829284   75402 cri.go:89] found id: ""
	I0816 18:15:47.829320   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.829333   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:47.829341   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:47.829413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:47.879482   75402 cri.go:89] found id: ""
	I0816 18:15:47.879514   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.879525   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:47.879532   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:47.879606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:47.913766   75402 cri.go:89] found id: ""
	I0816 18:15:47.913797   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.913808   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:47.913815   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:47.913880   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:47.947262   75402 cri.go:89] found id: ""
	I0816 18:15:47.947340   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.947353   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:47.947362   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:47.947427   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:47.979638   75402 cri.go:89] found id: ""
	I0816 18:15:47.979667   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.979678   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:47.979685   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:47.979741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:48.010246   75402 cri.go:89] found id: ""
	I0816 18:15:48.010277   75402 logs.go:276] 0 containers: []
	W0816 18:15:48.010288   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:48.010296   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:48.010310   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:48.083916   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:48.083953   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:44.940775   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.440356   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:46.207236   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.705791   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:45.948300   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.948501   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.120254   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:48.120285   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:48.169590   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:48.169628   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:48.182821   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:48.182850   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:48.254088   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:50.755114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:50.768167   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:50.768250   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:50.800881   75402 cri.go:89] found id: ""
	I0816 18:15:50.800906   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.800913   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:50.800918   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:50.800969   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:50.833538   75402 cri.go:89] found id: ""
	I0816 18:15:50.833567   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.833578   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:50.833586   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:50.833649   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:50.867306   75402 cri.go:89] found id: ""
	I0816 18:15:50.867336   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.867347   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:50.867353   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:50.867400   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:50.900029   75402 cri.go:89] found id: ""
	I0816 18:15:50.900055   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.900064   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:50.900072   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:50.900135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:50.933604   75402 cri.go:89] found id: ""
	I0816 18:15:50.933630   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.933638   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:50.933643   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:50.933707   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:50.966102   75402 cri.go:89] found id: ""
	I0816 18:15:50.966131   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.966141   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:50.966149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:50.966210   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:50.998007   75402 cri.go:89] found id: ""
	I0816 18:15:50.998036   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.998047   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:50.998054   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:50.998115   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:51.032306   75402 cri.go:89] found id: ""
	I0816 18:15:51.032342   75402 logs.go:276] 0 containers: []
	W0816 18:15:51.032349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:51.032357   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:51.032369   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:51.083186   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:51.083222   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:51.096072   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:51.096153   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:51.162667   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:51.162693   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:51.162709   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:51.241913   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:51.241954   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:49.440546   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:51.940026   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.706662   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.206075   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.447947   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:52.448340   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:54.448431   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.779323   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:53.793358   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:53.793433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:53.827380   75402 cri.go:89] found id: ""
	I0816 18:15:53.827414   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.827424   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:53.827430   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:53.827489   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:53.867331   75402 cri.go:89] found id: ""
	I0816 18:15:53.867370   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.867380   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:53.867386   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:53.867438   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:53.899445   75402 cri.go:89] found id: ""
	I0816 18:15:53.899477   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.899489   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:53.899498   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:53.899588   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:53.936527   75402 cri.go:89] found id: ""
	I0816 18:15:53.936556   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.936568   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:53.936576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:53.936653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:53.970739   75402 cri.go:89] found id: ""
	I0816 18:15:53.970765   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.970773   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:53.970780   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:53.970842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:54.004119   75402 cri.go:89] found id: ""
	I0816 18:15:54.004150   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.004159   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:54.004164   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:54.004217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:54.038370   75402 cri.go:89] found id: ""
	I0816 18:15:54.038400   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.038411   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:54.038416   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:54.038472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:54.079346   75402 cri.go:89] found id: ""
	I0816 18:15:54.079375   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.079383   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:54.079392   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:54.079403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:54.116551   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:54.116586   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:54.169930   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:54.169970   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:54.182416   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:54.182448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:54.253516   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:54.253539   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:54.253559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:56.833124   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:56.846139   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:56.846211   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:56.880899   75402 cri.go:89] found id: ""
	I0816 18:15:56.880928   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.880939   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:56.880945   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:56.880994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:56.913362   75402 cri.go:89] found id: ""
	I0816 18:15:56.913393   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.913406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:56.913415   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:56.913507   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:56.951876   75402 cri.go:89] found id: ""
	I0816 18:15:56.951904   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.951914   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:56.951919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:56.951988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:56.986335   75402 cri.go:89] found id: ""
	I0816 18:15:56.986358   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.986366   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:56.986372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:56.986423   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:57.022485   75402 cri.go:89] found id: ""
	I0816 18:15:57.022511   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.022522   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:57.022529   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:57.022641   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:57.055436   75402 cri.go:89] found id: ""
	I0816 18:15:57.055463   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.055470   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:57.055476   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:57.055536   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:57.085930   75402 cri.go:89] found id: ""
	I0816 18:15:57.085965   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.085975   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:57.085981   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:57.086032   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:57.120436   75402 cri.go:89] found id: ""
	I0816 18:15:57.120466   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.120477   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:57.120488   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:57.120501   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:57.202161   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:57.202218   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:57.243766   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:57.243805   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:57.295552   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:57.295585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:57.307769   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:57.307802   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:57.390480   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:53.941399   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.439763   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:58.440357   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:55.206970   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:57.207312   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.948085   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.448174   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.891480   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:59.904766   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:59.904836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:59.939209   75402 cri.go:89] found id: ""
	I0816 18:15:59.939241   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.939252   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:59.939260   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:59.939324   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:59.971782   75402 cri.go:89] found id: ""
	I0816 18:15:59.971812   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.971822   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:59.971832   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:59.971894   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:00.018585   75402 cri.go:89] found id: ""
	I0816 18:16:00.018630   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.018643   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:00.018654   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:00.018722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.050484   75402 cri.go:89] found id: ""
	I0816 18:16:00.050520   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.050532   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:00.050540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:00.050603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:00.082900   75402 cri.go:89] found id: ""
	I0816 18:16:00.082930   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.082942   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:00.082951   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:00.083025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:00.115330   75402 cri.go:89] found id: ""
	I0816 18:16:00.115363   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.115372   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:00.115378   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:00.115442   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:00.150898   75402 cri.go:89] found id: ""
	I0816 18:16:00.150935   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.150952   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:00.150960   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:00.151033   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:00.193304   75402 cri.go:89] found id: ""
	I0816 18:16:00.193338   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.193349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:00.193359   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:00.193370   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:00.247340   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:00.247376   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:00.260470   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:00.260500   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:00.336483   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:00.336506   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:00.336521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:00.421251   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:00.421289   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:02.964042   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:02.977284   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:02.977381   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:03.009533   75402 cri.go:89] found id: ""
	I0816 18:16:03.009574   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.009586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:03.009594   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:03.009673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:03.043756   75402 cri.go:89] found id: ""
	I0816 18:16:03.043784   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.043794   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:03.043802   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:03.043867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:03.078817   75402 cri.go:89] found id: ""
	I0816 18:16:03.078840   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.078848   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:03.078853   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:03.078906   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.440728   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:02.440788   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.706129   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.707967   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.948193   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.448504   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:03.112874   75402 cri.go:89] found id: ""
	I0816 18:16:03.112903   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.112912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:03.112918   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:03.112985   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:03.152008   75402 cri.go:89] found id: ""
	I0816 18:16:03.152040   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.152052   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:03.152059   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:03.152125   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:03.187353   75402 cri.go:89] found id: ""
	I0816 18:16:03.187386   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.187396   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:03.187404   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:03.187467   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:03.220860   75402 cri.go:89] found id: ""
	I0816 18:16:03.220895   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.220903   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:03.220909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:03.220958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:03.252202   75402 cri.go:89] found id: ""
	I0816 18:16:03.252240   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.252247   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:03.252256   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:03.252268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:03.286907   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:03.286934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:03.338212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:03.338249   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:03.352548   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:03.352585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:03.427580   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:03.427610   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:03.427626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.011792   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:06.024201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:06.024277   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:06.058328   75402 cri.go:89] found id: ""
	I0816 18:16:06.058356   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.058367   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:06.058373   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:06.058433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:06.091262   75402 cri.go:89] found id: ""
	I0816 18:16:06.091298   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.091311   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:06.091318   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:06.091382   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:06.124114   75402 cri.go:89] found id: ""
	I0816 18:16:06.124146   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.124154   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:06.124159   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:06.124220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:06.155379   75402 cri.go:89] found id: ""
	I0816 18:16:06.155406   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.155416   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:06.155422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:06.155471   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:06.189442   75402 cri.go:89] found id: ""
	I0816 18:16:06.189472   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.189480   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:06.189485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:06.189538   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:06.228881   75402 cri.go:89] found id: ""
	I0816 18:16:06.228910   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.228921   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:06.228929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:06.229003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:06.262272   75402 cri.go:89] found id: ""
	I0816 18:16:06.262299   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.262310   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:06.262317   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:06.262386   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:06.295427   75402 cri.go:89] found id: ""
	I0816 18:16:06.295456   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.295468   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:06.295478   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:06.295492   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:06.347569   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:06.347608   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:06.362786   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:06.362825   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:06.432020   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:06.432044   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:06.432059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.512085   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:06.512120   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:04.940128   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:07.439708   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.206477   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.208125   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.706765   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.947599   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.948183   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:09.051957   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:09.066630   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:09.066690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:09.101484   75402 cri.go:89] found id: ""
	I0816 18:16:09.101515   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.101526   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:09.101536   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:09.101614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:09.140645   75402 cri.go:89] found id: ""
	I0816 18:16:09.140677   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.140689   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:09.140696   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:09.140758   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:09.174666   75402 cri.go:89] found id: ""
	I0816 18:16:09.174698   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.174708   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:09.174717   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:09.174780   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:09.209715   75402 cri.go:89] found id: ""
	I0816 18:16:09.209748   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.209758   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:09.209767   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:09.209845   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:09.243681   75402 cri.go:89] found id: ""
	I0816 18:16:09.243712   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.243720   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:09.243726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:09.243781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:09.278058   75402 cri.go:89] found id: ""
	I0816 18:16:09.278090   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.278102   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:09.278111   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:09.278178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:09.313092   75402 cri.go:89] found id: ""
	I0816 18:16:09.313122   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.313132   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:09.313137   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:09.313201   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:09.345203   75402 cri.go:89] found id: ""
	I0816 18:16:09.345229   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.345236   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:09.345245   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:09.345259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:09.358198   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:09.358225   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:09.422024   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:09.422047   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:09.422059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:09.498684   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:09.498717   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.535349   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:09.535382   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.087472   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:12.100412   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:12.100477   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:12.133982   75402 cri.go:89] found id: ""
	I0816 18:16:12.134018   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.134030   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:12.134038   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:12.134100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:12.166466   75402 cri.go:89] found id: ""
	I0816 18:16:12.166497   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.166507   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:12.166514   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:12.166589   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:12.197752   75402 cri.go:89] found id: ""
	I0816 18:16:12.197779   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.197790   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:12.197797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:12.197856   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:12.239759   75402 cri.go:89] found id: ""
	I0816 18:16:12.239789   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.239801   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:12.239810   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:12.239871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:12.273263   75402 cri.go:89] found id: ""
	I0816 18:16:12.273292   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.273302   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:12.273310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:12.273370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:12.308788   75402 cri.go:89] found id: ""
	I0816 18:16:12.308820   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.308831   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:12.308839   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:12.308897   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:12.345243   75402 cri.go:89] found id: ""
	I0816 18:16:12.345274   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.345281   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:12.345288   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:12.345341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:12.379939   75402 cri.go:89] found id: ""
	I0816 18:16:12.379968   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.379978   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:12.379989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:12.380004   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.436097   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:12.436130   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:12.449328   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:12.449357   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:12.518723   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:12.518749   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:12.518764   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:12.600228   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:12.600268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.441051   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.441097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.206853   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.705328   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.449793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.948517   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.137940   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:15.150617   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:15.150694   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:15.186029   75402 cri.go:89] found id: ""
	I0816 18:16:15.186057   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.186067   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:15.186074   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:15.186134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:15.219812   75402 cri.go:89] found id: ""
	I0816 18:16:15.219840   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.219851   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:15.219864   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:15.219927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:15.253434   75402 cri.go:89] found id: ""
	I0816 18:16:15.253462   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.253472   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:15.253479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:15.253542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:15.286697   75402 cri.go:89] found id: ""
	I0816 18:16:15.286729   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.286745   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:15.286751   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:15.286810   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:15.319363   75402 cri.go:89] found id: ""
	I0816 18:16:15.319405   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.319415   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:15.319422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:15.319506   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:15.353900   75402 cri.go:89] found id: ""
	I0816 18:16:15.353924   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.353931   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:15.353937   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:15.353991   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:15.389086   75402 cri.go:89] found id: ""
	I0816 18:16:15.389114   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.389122   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:15.389127   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:15.389184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:15.424069   75402 cri.go:89] found id: ""
	I0816 18:16:15.424099   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.424110   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:15.424121   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:15.424136   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:15.482703   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:15.482738   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:15.496859   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:15.496886   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:15.562178   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:15.562196   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:15.562212   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:15.643484   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:15.643521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:13.944174   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.439987   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.442569   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.706743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.206088   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.448775   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.948447   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.180963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:18.194705   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:18.194783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:18.231302   75402 cri.go:89] found id: ""
	I0816 18:16:18.231337   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.231348   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:18.231355   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:18.231413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:18.264098   75402 cri.go:89] found id: ""
	I0816 18:16:18.264124   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.264135   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:18.264155   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:18.264228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:18.298133   75402 cri.go:89] found id: ""
	I0816 18:16:18.298165   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.298178   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:18.298186   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:18.298252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:18.331323   75402 cri.go:89] found id: ""
	I0816 18:16:18.331354   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.331362   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:18.331367   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:18.331416   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:18.365677   75402 cri.go:89] found id: ""
	I0816 18:16:18.365709   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.365718   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:18.365724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:18.365774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:18.399801   75402 cri.go:89] found id: ""
	I0816 18:16:18.399835   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.399844   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:18.399850   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:18.399908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:18.438148   75402 cri.go:89] found id: ""
	I0816 18:16:18.438179   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.438189   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:18.438197   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:18.438257   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:18.472185   75402 cri.go:89] found id: ""
	I0816 18:16:18.472215   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.472223   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:18.472232   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:18.472243   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:18.523369   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:18.523400   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:18.536152   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:18.536179   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:18.611539   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:18.611560   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:18.611571   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:18.688043   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:18.688079   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:21.229163   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:21.242641   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:21.242717   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:21.275188   75402 cri.go:89] found id: ""
	I0816 18:16:21.275213   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.275220   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:21.275226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:21.275275   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:21.308377   75402 cri.go:89] found id: ""
	I0816 18:16:21.308406   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.308417   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:21.308424   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:21.308475   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:21.341067   75402 cri.go:89] found id: ""
	I0816 18:16:21.341098   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.341106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:21.341112   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:21.341170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:21.372707   75402 cri.go:89] found id: ""
	I0816 18:16:21.372743   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.372756   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:21.372763   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:21.372847   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:21.410210   75402 cri.go:89] found id: ""
	I0816 18:16:21.410241   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.410252   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:21.410259   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:21.410323   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:21.444840   75402 cri.go:89] found id: ""
	I0816 18:16:21.444863   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.444872   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:21.444879   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:21.444942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:21.478278   75402 cri.go:89] found id: ""
	I0816 18:16:21.478319   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.478327   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:21.478333   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:21.478395   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:21.512026   75402 cri.go:89] found id: ""
	I0816 18:16:21.512063   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.512073   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:21.512090   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:21.512111   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:21.564800   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:21.564834   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:21.577343   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:21.577368   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:21.663216   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:21.663238   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:21.663251   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:21.741960   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:21.741994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:20.939740   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.942844   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:20.706032   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.707112   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:21.449404   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:23.454804   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:24.282136   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:24.296452   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:24.296513   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:24.337173   75402 cri.go:89] found id: ""
	I0816 18:16:24.337200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.337210   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:24.337218   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:24.337282   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:24.374163   75402 cri.go:89] found id: ""
	I0816 18:16:24.374200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.374213   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:24.374222   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:24.374287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:24.407823   75402 cri.go:89] found id: ""
	I0816 18:16:24.407854   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.407866   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:24.407881   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:24.407953   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:24.444006   75402 cri.go:89] found id: ""
	I0816 18:16:24.444032   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.444042   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:24.444049   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:24.444113   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:24.479082   75402 cri.go:89] found id: ""
	I0816 18:16:24.479110   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.479119   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:24.479125   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:24.479174   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:24.524738   75402 cri.go:89] found id: ""
	I0816 18:16:24.524764   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.524775   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:24.524782   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:24.524842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:24.560298   75402 cri.go:89] found id: ""
	I0816 18:16:24.560326   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.560335   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:24.560343   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:24.560406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:24.597182   75402 cri.go:89] found id: ""
	I0816 18:16:24.597214   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.597227   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:24.597239   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:24.597254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:24.653063   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:24.653106   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:24.665940   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:24.665972   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:24.736599   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:24.736639   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:24.736657   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:24.821883   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:24.821939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.359558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:27.382980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:27.383053   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:27.416766   75402 cri.go:89] found id: ""
	I0816 18:16:27.416793   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.416802   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:27.416811   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:27.416873   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:27.452966   75402 cri.go:89] found id: ""
	I0816 18:16:27.452988   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.452995   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:27.453001   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:27.453050   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:27.485850   75402 cri.go:89] found id: ""
	I0816 18:16:27.485885   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.485896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:27.485903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:27.485960   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:27.517667   75402 cri.go:89] found id: ""
	I0816 18:16:27.517694   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.517704   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:27.517711   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:27.517774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:27.553547   75402 cri.go:89] found id: ""
	I0816 18:16:27.553574   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.553582   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:27.553593   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:27.553653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:27.586857   75402 cri.go:89] found id: ""
	I0816 18:16:27.586884   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.586893   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:27.586898   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:27.586957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:27.621739   75402 cri.go:89] found id: ""
	I0816 18:16:27.621766   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.621776   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:27.621784   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:27.621844   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:27.657772   75402 cri.go:89] found id: ""
	I0816 18:16:27.657797   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.657805   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:27.657819   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:27.657831   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:27.729769   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:27.729796   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:27.729810   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:27.813351   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:27.813403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.852985   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:27.853010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:27.908434   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:27.908476   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:25.439828   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.440749   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.207590   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.706496   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.948579   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:28.448590   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.422781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:30.435987   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:30.436070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:30.470878   75402 cri.go:89] found id: ""
	I0816 18:16:30.470907   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.470918   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:30.470926   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:30.470983   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:30.504940   75402 cri.go:89] found id: ""
	I0816 18:16:30.504969   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.504980   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:30.504988   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:30.505058   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:30.538680   75402 cri.go:89] found id: ""
	I0816 18:16:30.538708   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.538716   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:30.538722   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:30.538788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:30.574757   75402 cri.go:89] found id: ""
	I0816 18:16:30.574782   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.574791   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:30.574797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:30.574853   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:30.612500   75402 cri.go:89] found id: ""
	I0816 18:16:30.612529   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.612539   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:30.612547   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:30.612613   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:30.644572   75402 cri.go:89] found id: ""
	I0816 18:16:30.644595   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.644603   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:30.644609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:30.644678   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:30.678199   75402 cri.go:89] found id: ""
	I0816 18:16:30.678232   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.678243   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:30.678252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:30.678331   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:30.709435   75402 cri.go:89] found id: ""
	I0816 18:16:30.709470   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.709482   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:30.709494   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:30.709511   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.723430   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:30.723464   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:30.800340   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:30.800374   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:30.800390   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:30.883945   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:30.883986   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:30.922107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:30.922139   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:29.940430   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.440198   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:29.706649   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.205271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.949515   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.448456   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.480016   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:33.494178   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:33.494241   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:33.529497   75402 cri.go:89] found id: ""
	I0816 18:16:33.529527   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.529546   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:33.529554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:33.529614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:33.566670   75402 cri.go:89] found id: ""
	I0816 18:16:33.566700   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.566711   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:33.566718   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:33.566781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:33.603898   75402 cri.go:89] found id: ""
	I0816 18:16:33.603926   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.603937   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:33.603944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:33.604003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:33.636077   75402 cri.go:89] found id: ""
	I0816 18:16:33.636111   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.636125   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:33.636134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:33.636200   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:33.668974   75402 cri.go:89] found id: ""
	I0816 18:16:33.669002   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.669011   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:33.669017   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:33.669070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:33.700981   75402 cri.go:89] found id: ""
	I0816 18:16:33.701010   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.701019   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:33.701026   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:33.701088   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:33.735430   75402 cri.go:89] found id: ""
	I0816 18:16:33.735463   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.735474   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:33.735481   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:33.735539   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:33.779797   75402 cri.go:89] found id: ""
	I0816 18:16:33.779829   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.779840   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:33.779851   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:33.779865   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:33.824873   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:33.824908   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.874177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:33.874217   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:33.888535   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:33.888561   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:33.957590   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:33.957608   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:33.957627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:36.533660   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:36.546542   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:36.546606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:36.584056   75402 cri.go:89] found id: ""
	I0816 18:16:36.584085   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.584094   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:36.584099   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:36.584149   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:36.622143   75402 cri.go:89] found id: ""
	I0816 18:16:36.622172   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.622184   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:36.622193   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:36.622262   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:36.655479   75402 cri.go:89] found id: ""
	I0816 18:16:36.655509   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.655520   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:36.655528   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:36.655603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:36.688044   75402 cri.go:89] found id: ""
	I0816 18:16:36.688076   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.688088   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:36.688096   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:36.688161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:36.725831   75402 cri.go:89] found id: ""
	I0816 18:16:36.725861   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.725868   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:36.725874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:36.725925   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:36.758398   75402 cri.go:89] found id: ""
	I0816 18:16:36.758433   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.758444   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:36.758453   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:36.758517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:36.791097   75402 cri.go:89] found id: ""
	I0816 18:16:36.791126   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.791136   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:36.791144   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:36.791207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:36.829337   75402 cri.go:89] found id: ""
	I0816 18:16:36.829369   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.829380   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:36.829391   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:36.829405   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:36.881898   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:36.881932   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:36.895584   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:36.895618   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:36.967175   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:36.967197   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:36.967213   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:37.046993   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:37.047025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:34.440475   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.946369   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:34.206677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.207893   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:38.706193   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:35.449611   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:37.947527   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.588683   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:39.607205   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:39.607287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:39.640517   75402 cri.go:89] found id: ""
	I0816 18:16:39.640541   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.640549   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:39.640554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:39.640604   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:39.673777   75402 cri.go:89] found id: ""
	I0816 18:16:39.673805   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.673813   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:39.673818   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:39.673899   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:39.709574   75402 cri.go:89] found id: ""
	I0816 18:16:39.709598   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.709606   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:39.709611   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:39.709666   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:39.743946   75402 cri.go:89] found id: ""
	I0816 18:16:39.743971   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.743979   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:39.743985   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:39.744041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:39.776140   75402 cri.go:89] found id: ""
	I0816 18:16:39.776171   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.776181   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:39.776187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:39.776254   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:39.808697   75402 cri.go:89] found id: ""
	I0816 18:16:39.808719   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.808728   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:39.808734   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:39.808793   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:39.840163   75402 cri.go:89] found id: ""
	I0816 18:16:39.840190   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.840200   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:39.840206   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:39.840270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:39.874396   75402 cri.go:89] found id: ""
	I0816 18:16:39.874419   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.874426   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:39.874434   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:39.874448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:39.927922   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:39.927963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:39.942048   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:39.942076   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:40.012143   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:40.012166   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:40.012181   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:40.088798   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:40.088844   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:42.625875   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:42.640386   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:42.640448   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:42.675201   75402 cri.go:89] found id: ""
	I0816 18:16:42.675224   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.675231   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:42.675236   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:42.675293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:42.705156   75402 cri.go:89] found id: ""
	I0816 18:16:42.705182   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.705192   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:42.705199   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:42.705258   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:42.738921   75402 cri.go:89] found id: ""
	I0816 18:16:42.738948   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.738956   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:42.738962   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:42.739013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:42.771130   75402 cri.go:89] found id: ""
	I0816 18:16:42.771160   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.771168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:42.771175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:42.771231   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:42.805774   75402 cri.go:89] found id: ""
	I0816 18:16:42.805803   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.805811   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:42.805817   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:42.805879   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:42.840248   75402 cri.go:89] found id: ""
	I0816 18:16:42.840277   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.840293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:42.840302   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:42.840360   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:42.873260   75402 cri.go:89] found id: ""
	I0816 18:16:42.873287   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.873297   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:42.873322   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:42.873383   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:42.906205   75402 cri.go:89] found id: ""
	I0816 18:16:42.906230   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.906238   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:42.906247   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:42.906257   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:42.959235   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:42.959272   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:42.972063   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:42.972090   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:43.039530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:43.039558   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:43.039569   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:39.440219   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:41.441052   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:40.707059   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.210643   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:42.448534   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.115486   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:43.115519   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:45.651040   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:45.663718   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:45.663812   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:45.696548   75402 cri.go:89] found id: ""
	I0816 18:16:45.696578   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.696586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:45.696591   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:45.696663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:45.731032   75402 cri.go:89] found id: ""
	I0816 18:16:45.731059   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.731068   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:45.731073   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:45.731126   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:45.764801   75402 cri.go:89] found id: ""
	I0816 18:16:45.764829   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.764840   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:45.764846   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:45.764908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:45.800768   75402 cri.go:89] found id: ""
	I0816 18:16:45.800795   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.800803   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:45.800809   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:45.800858   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:45.841460   75402 cri.go:89] found id: ""
	I0816 18:16:45.841486   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.841493   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:45.841505   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:45.841566   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:45.875230   75402 cri.go:89] found id: ""
	I0816 18:16:45.875254   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.875261   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:45.875266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:45.875319   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:45.907711   75402 cri.go:89] found id: ""
	I0816 18:16:45.907739   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.907747   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:45.907753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:45.907804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:45.943147   75402 cri.go:89] found id: ""
	I0816 18:16:45.943171   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.943182   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:45.943192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:45.943206   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:45.998459   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:45.998491   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:46.013237   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:46.013267   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:46.079248   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:46.079273   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:46.079288   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:46.158842   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:46.158874   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:43.939212   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.939893   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:47.940331   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.706588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.206342   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:44.948046   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:46.948752   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:49.448263   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.696728   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:48.710946   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:48.711041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:48.746696   75402 cri.go:89] found id: ""
	I0816 18:16:48.746727   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.746735   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:48.746741   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:48.746803   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:48.781496   75402 cri.go:89] found id: ""
	I0816 18:16:48.781522   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.781532   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:48.781539   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:48.781603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:48.815628   75402 cri.go:89] found id: ""
	I0816 18:16:48.815654   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.815665   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:48.815673   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:48.815736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:48.848990   75402 cri.go:89] found id: ""
	I0816 18:16:48.849018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.849030   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:48.849040   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:48.849098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:48.886924   75402 cri.go:89] found id: ""
	I0816 18:16:48.886949   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.886960   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:48.886968   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:48.887022   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:48.923989   75402 cri.go:89] found id: ""
	I0816 18:16:48.924018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.924030   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:48.924038   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:48.924102   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:48.959513   75402 cri.go:89] found id: ""
	I0816 18:16:48.959546   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.959556   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:48.959562   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:48.959614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:48.995615   75402 cri.go:89] found id: ""
	I0816 18:16:48.995651   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.995662   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:48.995673   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:48.995688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:49.008440   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:49.008468   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:49.076761   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:49.076780   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:49.076797   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:49.152855   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:49.152893   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:49.190857   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:49.190887   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:51.745344   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:51.759552   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:51.759628   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:51.795494   75402 cri.go:89] found id: ""
	I0816 18:16:51.795520   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.795531   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:51.795539   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:51.795600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:51.833162   75402 cri.go:89] found id: ""
	I0816 18:16:51.833188   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.833198   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:51.833205   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:51.833265   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:51.866940   75402 cri.go:89] found id: ""
	I0816 18:16:51.866968   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.866979   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:51.866986   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:51.867051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:51.899824   75402 cri.go:89] found id: ""
	I0816 18:16:51.899857   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.899867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:51.899874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:51.899937   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:51.932273   75402 cri.go:89] found id: ""
	I0816 18:16:51.932297   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.932312   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:51.932320   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:51.932390   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:51.966885   75402 cri.go:89] found id: ""
	I0816 18:16:51.966911   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.966922   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:51.966930   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:51.966996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:52.002988   75402 cri.go:89] found id: ""
	I0816 18:16:52.003020   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.003029   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:52.003035   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:52.003098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:52.038858   75402 cri.go:89] found id: ""
	I0816 18:16:52.038894   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.038909   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:52.038919   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:52.038933   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:52.076404   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:52.076431   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:52.127735   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:52.127767   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:52.140657   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:52.140680   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:52.202961   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:52.202989   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:52.203008   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:50.440577   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.441865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:50.705618   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.706795   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:51.448948   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:53.947907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:54.787095   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:54.801258   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:54.801332   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:54.837987   75402 cri.go:89] found id: ""
	I0816 18:16:54.838018   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.838028   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:54.838034   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:54.838118   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:54.872439   75402 cri.go:89] found id: ""
	I0816 18:16:54.872466   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.872477   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:54.872490   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:54.872554   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:54.904676   75402 cri.go:89] found id: ""
	I0816 18:16:54.904706   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.904717   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:54.904724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:54.904783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:54.938101   75402 cri.go:89] found id: ""
	I0816 18:16:54.938134   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.938145   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:54.938154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:54.938218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:54.977409   75402 cri.go:89] found id: ""
	I0816 18:16:54.977442   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.977453   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:54.977460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:54.977521   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:55.013248   75402 cri.go:89] found id: ""
	I0816 18:16:55.013275   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.013286   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:55.013294   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:55.013363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:55.044555   75402 cri.go:89] found id: ""
	I0816 18:16:55.044588   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.044597   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:55.044603   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:55.044690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:55.075970   75402 cri.go:89] found id: ""
	I0816 18:16:55.075997   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.076006   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:55.076014   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:55.076025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:55.149982   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:55.150017   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:55.190160   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:55.190194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:55.242629   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:55.242660   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:55.255229   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:55.255254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:55.324775   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:57.824996   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:57.838666   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:57.838740   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:57.872828   75402 cri.go:89] found id: ""
	I0816 18:16:57.872861   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.872869   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:57.872875   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:57.872927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:57.907324   75402 cri.go:89] found id: ""
	I0816 18:16:57.907354   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.907366   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:57.907373   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:57.907433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:57.941657   75402 cri.go:89] found id: ""
	I0816 18:16:57.941682   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.941689   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:57.941695   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:57.941746   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:57.981424   75402 cri.go:89] found id: ""
	I0816 18:16:57.981466   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.981480   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:57.981489   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:57.981562   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:58.015534   75402 cri.go:89] found id: ""
	I0816 18:16:58.015587   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.015598   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:58.015606   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:58.015669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:58.047875   75402 cri.go:89] found id: ""
	I0816 18:16:58.047908   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.047917   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:58.047923   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:58.047976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:58.079294   75402 cri.go:89] found id: ""
	I0816 18:16:58.079324   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.079334   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:58.079342   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:58.079406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:54.940977   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.439254   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.208298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.706380   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.948080   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.949589   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:58.112357   75402 cri.go:89] found id: ""
	I0816 18:16:58.112389   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.112401   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:58.112413   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:58.112428   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:58.159903   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:58.159934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:58.172763   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:58.172789   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:58.245827   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:58.245856   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:58.245872   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:58.325008   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:58.325049   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:00.864354   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:00.877517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:00.877593   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:00.915396   75402 cri.go:89] found id: ""
	I0816 18:17:00.915428   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.915438   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:00.915446   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:00.915611   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:00.953950   75402 cri.go:89] found id: ""
	I0816 18:17:00.953977   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.953987   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:00.953993   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:00.954051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:00.987673   75402 cri.go:89] found id: ""
	I0816 18:17:00.987703   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.987713   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:00.987721   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:00.987784   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:01.021230   75402 cri.go:89] found id: ""
	I0816 18:17:01.021277   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.021308   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:01.021315   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:01.021388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:01.057087   75402 cri.go:89] found id: ""
	I0816 18:17:01.057117   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.057127   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:01.057135   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:01.057207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:01.094142   75402 cri.go:89] found id: ""
	I0816 18:17:01.094168   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.094176   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:01.094183   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:01.094233   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:01.132799   75402 cri.go:89] found id: ""
	I0816 18:17:01.132824   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.132831   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:01.132837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:01.132888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:01.173367   75402 cri.go:89] found id: ""
	I0816 18:17:01.173402   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.173414   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:01.173425   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:01.173443   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:01.186856   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:01.186896   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:01.259913   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:01.259941   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:01.259955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:01.340914   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:01.340947   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:01.381023   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:01.381058   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:59.440314   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.440377   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:59.706750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.707186   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:00.448182   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:02.448773   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:03.933420   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:03.946940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:03.947008   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:03.984529   75402 cri.go:89] found id: ""
	I0816 18:17:03.984560   75402 logs.go:276] 0 containers: []
	W0816 18:17:03.984571   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:03.984581   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:03.984668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:04.017900   75402 cri.go:89] found id: ""
	I0816 18:17:04.017929   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.017940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:04.017948   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:04.018009   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:04.050837   75402 cri.go:89] found id: ""
	I0816 18:17:04.050871   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.050888   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:04.050896   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:04.050959   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:04.085448   75402 cri.go:89] found id: ""
	I0816 18:17:04.085477   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.085487   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:04.085495   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:04.085564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:04.118177   75402 cri.go:89] found id: ""
	I0816 18:17:04.118203   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.118213   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:04.118220   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:04.118284   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:04.150289   75402 cri.go:89] found id: ""
	I0816 18:17:04.150317   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.150330   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:04.150338   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:04.150404   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:04.184258   75402 cri.go:89] found id: ""
	I0816 18:17:04.184282   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.184290   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:04.184295   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:04.184347   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:04.217142   75402 cri.go:89] found id: ""
	I0816 18:17:04.217174   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.217184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:04.217192   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:04.217204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:04.253000   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:04.253034   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:04.304978   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:04.305018   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:04.320210   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:04.320241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:04.396146   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:04.396169   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:04.396184   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:06.980747   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:06.992944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:06.993006   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:07.026303   75402 cri.go:89] found id: ""
	I0816 18:17:07.026356   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.026368   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:07.026376   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:07.026443   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:07.059226   75402 cri.go:89] found id: ""
	I0816 18:17:07.059257   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.059268   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:07.059277   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:07.059339   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:07.092142   75402 cri.go:89] found id: ""
	I0816 18:17:07.092171   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.092182   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:07.092188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:07.092248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:07.125284   75402 cri.go:89] found id: ""
	I0816 18:17:07.125330   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.125347   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:07.125355   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:07.125420   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:07.163890   75402 cri.go:89] found id: ""
	I0816 18:17:07.163919   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.163930   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:07.163938   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:07.164002   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:07.197988   75402 cri.go:89] found id: ""
	I0816 18:17:07.198014   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.198025   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:07.198033   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:07.198116   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:07.232709   75402 cri.go:89] found id: ""
	I0816 18:17:07.232738   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.232749   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:07.232756   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:07.232817   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:07.264514   75402 cri.go:89] found id: ""
	I0816 18:17:07.264548   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.264558   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:07.264569   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:07.264583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:07.316138   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:07.316173   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:07.329659   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:07.329688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:07.397345   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:07.397380   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:07.397397   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:07.481245   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:07.481280   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
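Each repeated cycle above is minikube probing for control-plane containers (kube-apiserver, etcd, coredns, the scheduler, kube-proxy, the controller-manager) and, finding none, collecting diagnostics from the node. A minimal manual equivalent, shown here only as a sketch that assumes SSH access to the minikube node and reuses the exact commands visible in the log lines, would be:

    # Probe-and-collect cycle, reproduced from the Run: lines above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process running at all?
    sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, even exited?
    sudo journalctl -u kubelet -n 400                   # kubelet log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig         # fails below with "connection refused"
    sudo journalctl -u crio -n 400                      # CRI-O log tail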
	I0816 18:17:03.940100   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:05.940355   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.940821   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.207253   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:06.705745   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:08.706828   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.949027   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.447957   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
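The interleaved pod_ready lines come from separate test runs polling the same condition: the metrics-server pod's Ready status in kube-system, which stays "False" throughout. A sketch of how that condition could be inspected directly, assuming kubectl access to the affected cluster and using the pod name copied from the log, is:

    # Read the Ready condition the poll loop is waiting on
    kubectl -n kube-system get pod metrics-server-6867b74b74-fc4h4 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Events usually explain why the pod never becomes Ready
    kubectl -n kube-system describe pod metrics-server-6867b74b74-fc4h4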
	I0816 18:17:10.024405   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:10.036860   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:10.036927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.069402   75402 cri.go:89] found id: ""
	I0816 18:17:10.069436   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.069448   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:10.069458   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:10.069511   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:10.101480   75402 cri.go:89] found id: ""
	I0816 18:17:10.101508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.101518   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:10.101529   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:10.101601   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:10.131673   75402 cri.go:89] found id: ""
	I0816 18:17:10.131708   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.131719   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:10.131726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:10.131821   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:10.166476   75402 cri.go:89] found id: ""
	I0816 18:17:10.166508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.166518   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:10.166525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:10.166590   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:10.199296   75402 cri.go:89] found id: ""
	I0816 18:17:10.199321   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.199332   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:10.199340   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:10.199406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:10.232640   75402 cri.go:89] found id: ""
	I0816 18:17:10.232672   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.232683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:10.232691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:10.232775   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:10.263958   75402 cri.go:89] found id: ""
	I0816 18:17:10.263988   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.263998   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:10.264003   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:10.264052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:10.295904   75402 cri.go:89] found id: ""
	I0816 18:17:10.295929   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.295937   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:10.295946   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:10.295957   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:10.344874   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:10.344909   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:10.358523   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:10.358552   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:10.433311   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:10.433334   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:10.433351   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:10.514580   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:10.514620   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
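The recurring "connection to the server localhost:8443 was refused" in the describe-nodes step indicates that nothing is listening on the apiserver port on the node, which is consistent with every kube-apiserver container lookup returning no IDs. Two quick checks, sketched here under the assumption of SSH access to the node (the port number is taken from the error text), would be:

    sudo ss -tlnp | grep 8443                 # is anything bound to the apiserver port?
    curl -k https://localhost:8443/healthz    # expect "connection refused" while no apiserver is up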
	I0816 18:17:13.053815   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:13.068517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:13.068597   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.440472   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:12.939209   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.707438   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.207630   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:09.947889   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:11.949408   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:14.447906   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.104251   75402 cri.go:89] found id: ""
	I0816 18:17:13.104279   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.104313   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:13.104321   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:13.104375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:13.137415   75402 cri.go:89] found id: ""
	I0816 18:17:13.137442   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.137453   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:13.137461   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:13.137510   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:13.174165   75402 cri.go:89] found id: ""
	I0816 18:17:13.174191   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.174203   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:13.174210   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:13.174271   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:13.206789   75402 cri.go:89] found id: ""
	I0816 18:17:13.206814   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.206823   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:13.206831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:13.206892   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:13.238950   75402 cri.go:89] found id: ""
	I0816 18:17:13.238975   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.238984   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:13.238990   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:13.239037   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:13.271485   75402 cri.go:89] found id: ""
	I0816 18:17:13.271518   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.271535   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:13.271544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:13.271612   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:13.307576   75402 cri.go:89] found id: ""
	I0816 18:17:13.307610   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.307622   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:13.307632   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:13.307698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:13.339746   75402 cri.go:89] found id: ""
	I0816 18:17:13.339792   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.339802   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:13.339813   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:13.339827   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:13.352847   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:13.352875   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:13.440397   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:13.440418   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:13.440432   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:13.514879   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:13.514916   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.553848   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:13.553882   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.103318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:16.115837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:16.115922   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:16.147079   75402 cri.go:89] found id: ""
	I0816 18:17:16.147108   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.147119   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:16.147127   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:16.147189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:16.184207   75402 cri.go:89] found id: ""
	I0816 18:17:16.184233   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.184241   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:16.184247   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:16.184295   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:16.219036   75402 cri.go:89] found id: ""
	I0816 18:17:16.219065   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.219072   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:16.219078   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:16.219163   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:16.251269   75402 cri.go:89] found id: ""
	I0816 18:17:16.251307   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.251320   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:16.251329   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:16.251394   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:16.286549   75402 cri.go:89] found id: ""
	I0816 18:17:16.286576   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.286585   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:16.286591   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:16.286647   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:16.322017   75402 cri.go:89] found id: ""
	I0816 18:17:16.322045   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.322055   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:16.322063   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:16.322128   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:16.353606   75402 cri.go:89] found id: ""
	I0816 18:17:16.353636   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.353646   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:16.353653   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:16.353719   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:16.386973   75402 cri.go:89] found id: ""
	I0816 18:17:16.387005   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.387016   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:16.387027   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:16.387039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.437031   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:16.437066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:16.451258   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:16.451292   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:16.519130   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:16.519155   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:16.519170   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:16.598591   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:16.598626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:14.939993   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.440655   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:15.705969   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.706271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:16.449266   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:18.948220   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:19.147916   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:19.160525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:19.160600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:19.193494   75402 cri.go:89] found id: ""
	I0816 18:17:19.193520   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.193527   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:19.193533   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:19.193599   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:19.230936   75402 cri.go:89] found id: ""
	I0816 18:17:19.230963   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.230971   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:19.230976   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:19.231029   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:19.263713   75402 cri.go:89] found id: ""
	I0816 18:17:19.263735   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.263742   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:19.263748   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:19.263794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:19.294609   75402 cri.go:89] found id: ""
	I0816 18:17:19.294635   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.294642   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:19.294647   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:19.294698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:19.329278   75402 cri.go:89] found id: ""
	I0816 18:17:19.329303   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.329313   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:19.329319   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:19.329368   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:19.362007   75402 cri.go:89] found id: ""
	I0816 18:17:19.362043   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.362052   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:19.362067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:19.362120   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:19.395190   75402 cri.go:89] found id: ""
	I0816 18:17:19.395217   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.395248   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:19.395255   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:19.395302   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:19.426962   75402 cri.go:89] found id: ""
	I0816 18:17:19.426991   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.427002   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:19.427012   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:19.427027   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.441319   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:19.441346   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:19.511390   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:19.511409   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:19.511425   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:19.590897   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:19.590935   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:19.628753   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:19.628781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.182534   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:22.194844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:22.194917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:22.228225   75402 cri.go:89] found id: ""
	I0816 18:17:22.228247   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.228269   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:22.228276   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:22.228325   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:22.258614   75402 cri.go:89] found id: ""
	I0816 18:17:22.258646   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.258654   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:22.258660   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:22.258708   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:22.289103   75402 cri.go:89] found id: ""
	I0816 18:17:22.289136   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.289147   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:22.289154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:22.289215   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:22.321828   75402 cri.go:89] found id: ""
	I0816 18:17:22.321857   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.321869   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:22.321877   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:22.321942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:22.353557   75402 cri.go:89] found id: ""
	I0816 18:17:22.353588   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.353597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:22.353602   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:22.353660   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:22.385078   75402 cri.go:89] found id: ""
	I0816 18:17:22.385103   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.385110   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:22.385116   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:22.385189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:22.415864   75402 cri.go:89] found id: ""
	I0816 18:17:22.415900   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.415913   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:22.415922   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:22.415990   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:22.449895   75402 cri.go:89] found id: ""
	I0816 18:17:22.449922   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.449942   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:22.449957   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:22.449974   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:22.523055   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:22.523073   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:22.523084   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:22.599680   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:22.599719   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:22.638021   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:22.638057   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.688970   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:22.689010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.941154   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.440580   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:20.207713   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.706805   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:21.448399   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:23.448444   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.202748   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:25.217316   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:25.217388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:25.249528   75402 cri.go:89] found id: ""
	I0816 18:17:25.249558   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.249566   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:25.249578   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:25.249625   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:25.282667   75402 cri.go:89] found id: ""
	I0816 18:17:25.282696   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.282706   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:25.282712   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:25.282764   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:25.314061   75402 cri.go:89] found id: ""
	I0816 18:17:25.314091   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.314101   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:25.314108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:25.314161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:25.351260   75402 cri.go:89] found id: ""
	I0816 18:17:25.351287   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.351296   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:25.351301   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:25.351352   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:25.388303   75402 cri.go:89] found id: ""
	I0816 18:17:25.388334   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.388345   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:25.388352   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:25.388412   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:25.422133   75402 cri.go:89] found id: ""
	I0816 18:17:25.422161   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.422169   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:25.422175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:25.422232   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:25.456749   75402 cri.go:89] found id: ""
	I0816 18:17:25.456775   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.456783   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:25.456789   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:25.456836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:25.494783   75402 cri.go:89] found id: ""
	I0816 18:17:25.494809   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.494817   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:25.494825   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:25.494836   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:25.561253   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:25.561290   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.580349   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:25.580383   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:25.656333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:25.656361   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:25.656378   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:25.733479   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:25.733515   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:24.444069   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.939743   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:24.707849   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.709711   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.448555   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:27.449070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:28.272217   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:28.285750   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:28.285822   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:28.318230   75402 cri.go:89] found id: ""
	I0816 18:17:28.318260   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.318268   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:28.318275   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:28.318344   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:28.351766   75402 cri.go:89] found id: ""
	I0816 18:17:28.351798   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.351808   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:28.351814   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:28.351872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:28.385543   75402 cri.go:89] found id: ""
	I0816 18:17:28.385572   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.385581   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:28.385588   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:28.385653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:28.418808   75402 cri.go:89] found id: ""
	I0816 18:17:28.418837   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.418846   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:28.418852   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:28.418900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:28.453883   75402 cri.go:89] found id: ""
	I0816 18:17:28.453911   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.453922   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:28.453929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:28.453996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:28.486261   75402 cri.go:89] found id: ""
	I0816 18:17:28.486291   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.486304   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:28.486310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:28.486366   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:28.520617   75402 cri.go:89] found id: ""
	I0816 18:17:28.520658   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.520670   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:28.520678   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:28.520731   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:28.552996   75402 cri.go:89] found id: ""
	I0816 18:17:28.553026   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.553036   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:28.553046   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:28.553061   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:28.604149   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:28.604192   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:28.617393   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:28.617421   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:28.683258   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:28.683279   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:28.683294   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.766933   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:28.766977   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.305897   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:31.326070   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:31.326143   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:31.375314   75402 cri.go:89] found id: ""
	I0816 18:17:31.375350   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.375361   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:31.375369   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:31.375429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:31.407372   75402 cri.go:89] found id: ""
	I0816 18:17:31.407398   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.407406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:31.407411   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:31.407459   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:31.445679   75402 cri.go:89] found id: ""
	I0816 18:17:31.445706   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.445714   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:31.445720   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:31.445781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:31.480040   75402 cri.go:89] found id: ""
	I0816 18:17:31.480072   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.480080   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:31.480085   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:31.480145   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:31.511143   75402 cri.go:89] found id: ""
	I0816 18:17:31.511171   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.511182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:31.511188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:31.511252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:31.544254   75402 cri.go:89] found id: ""
	I0816 18:17:31.544282   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.544293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:31.544300   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:31.544363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:31.579007   75402 cri.go:89] found id: ""
	I0816 18:17:31.579033   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.579041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:31.579046   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:31.579108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:31.619966   75402 cri.go:89] found id: ""
	I0816 18:17:31.619995   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.620005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:31.620018   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:31.620035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.657784   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:31.657815   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:31.706824   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:31.706853   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:31.719696   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:31.719721   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:31.786096   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:31.786124   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:31.786142   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.940711   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.440514   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.206929   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.706188   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:33.706244   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.948053   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:32.448453   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.363862   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:34.377365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:34.377430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:34.414191   75402 cri.go:89] found id: ""
	I0816 18:17:34.414216   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.414223   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:34.414229   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:34.414285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:34.446811   75402 cri.go:89] found id: ""
	I0816 18:17:34.446836   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.446843   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:34.446848   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:34.446905   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:34.477582   75402 cri.go:89] found id: ""
	I0816 18:17:34.477615   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.477627   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:34.477634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:34.477695   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:34.507868   75402 cri.go:89] found id: ""
	I0816 18:17:34.507901   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.507912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:34.507921   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:34.507984   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:34.538719   75402 cri.go:89] found id: ""
	I0816 18:17:34.538754   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.538765   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:34.538772   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:34.538826   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:34.571445   75402 cri.go:89] found id: ""
	I0816 18:17:34.571468   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.571477   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:34.571484   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:34.571557   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:34.601587   75402 cri.go:89] found id: ""
	I0816 18:17:34.601611   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.601618   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:34.601624   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:34.601669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:34.634850   75402 cri.go:89] found id: ""
	I0816 18:17:34.634878   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.634892   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:34.634906   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:34.634920   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:34.682828   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:34.682859   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:34.695796   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:34.695820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:34.762100   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:34.762121   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:34.762133   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.845329   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:34.845359   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:37.386266   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:37.398940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:37.399005   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:37.433072   75402 cri.go:89] found id: ""
	I0816 18:17:37.433099   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.433112   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:37.433118   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:37.433169   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:37.466968   75402 cri.go:89] found id: ""
	I0816 18:17:37.467001   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.467012   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:37.467021   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:37.467086   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:37.509268   75402 cri.go:89] found id: ""
	I0816 18:17:37.509291   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.509300   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:37.509306   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:37.509365   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:37.541295   75402 cri.go:89] found id: ""
	I0816 18:17:37.541338   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.541350   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:37.541357   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:37.541421   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:37.575423   75402 cri.go:89] found id: ""
	I0816 18:17:37.575453   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.575464   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:37.575472   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:37.575540   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:37.614787   75402 cri.go:89] found id: ""
	I0816 18:17:37.614817   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.614828   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:37.614835   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:37.614896   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:37.646396   75402 cri.go:89] found id: ""
	I0816 18:17:37.646430   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.646441   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:37.646449   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:37.646517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:37.679383   75402 cri.go:89] found id: ""
	I0816 18:17:37.679414   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.679423   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:37.679431   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:37.679442   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:37.729641   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:37.729673   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:37.742420   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:37.742448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:37.812572   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:37.812600   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:37.812615   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:37.887100   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:37.887137   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:33.940380   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.941055   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.440700   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.706903   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.207115   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.947638   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:37.448511   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:39.448944   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.424202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:40.438231   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:40.438337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:40.474614   75402 cri.go:89] found id: ""
	I0816 18:17:40.474639   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.474648   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:40.474653   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:40.474701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:40.510123   75402 cri.go:89] found id: ""
	I0816 18:17:40.510154   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.510162   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:40.510167   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:40.510217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:40.548971   75402 cri.go:89] found id: ""
	I0816 18:17:40.549000   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.549008   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:40.549013   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:40.549069   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:40.595126   75402 cri.go:89] found id: ""
	I0816 18:17:40.595158   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.595167   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:40.595174   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:40.595220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:40.629769   75402 cri.go:89] found id: ""
	I0816 18:17:40.629793   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.629801   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:40.629807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:40.629871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:40.661889   75402 cri.go:89] found id: ""
	I0816 18:17:40.661922   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.661932   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:40.661939   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:40.662001   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:40.697764   75402 cri.go:89] found id: ""
	I0816 18:17:40.697790   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.697801   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:40.697808   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:40.697867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:40.734825   75402 cri.go:89] found id: ""
	I0816 18:17:40.734852   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.734862   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:40.734872   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:40.734939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:40.787975   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:40.788015   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:40.800817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:40.800843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:40.874182   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:40.874205   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:40.874219   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:40.960032   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:40.960066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:40.940284   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.943218   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.207943   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.707356   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:41.947437   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.947887   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.499770   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:43.513726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:43.513806   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:43.548368   75402 cri.go:89] found id: ""
	I0816 18:17:43.548396   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.548406   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:43.548413   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:43.548474   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:43.581177   75402 cri.go:89] found id: ""
	I0816 18:17:43.581205   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.581216   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:43.581223   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:43.581291   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:43.614315   75402 cri.go:89] found id: ""
	I0816 18:17:43.614354   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.614367   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:43.614374   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:43.614437   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:43.648608   75402 cri.go:89] found id: ""
	I0816 18:17:43.648645   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.648658   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:43.648669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:43.648722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:43.680549   75402 cri.go:89] found id: ""
	I0816 18:17:43.680586   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.680597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:43.680604   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:43.680686   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:43.710473   75402 cri.go:89] found id: ""
	I0816 18:17:43.710497   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.710506   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:43.710514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:43.710576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:43.741415   75402 cri.go:89] found id: ""
	I0816 18:17:43.741442   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.741450   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:43.741456   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:43.741505   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:43.775018   75402 cri.go:89] found id: ""
	I0816 18:17:43.775051   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.775063   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:43.775074   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:43.775087   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:43.825596   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:43.825630   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:43.839133   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:43.839161   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:43.905645   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:43.905667   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:43.905679   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:43.988860   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:43.988901   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.525896   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:46.539147   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:46.539229   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:46.570703   75402 cri.go:89] found id: ""
	I0816 18:17:46.570726   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.570734   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:46.570740   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:46.570785   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:46.605909   75402 cri.go:89] found id: ""
	I0816 18:17:46.605939   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.605954   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:46.605961   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:46.606013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:46.638865   75402 cri.go:89] found id: ""
	I0816 18:17:46.638899   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.638911   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:46.638919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:46.638994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:46.671869   75402 cri.go:89] found id: ""
	I0816 18:17:46.671904   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.671917   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:46.671926   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:46.671988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:46.703423   75402 cri.go:89] found id: ""
	I0816 18:17:46.703464   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.703473   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:46.703479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:46.703545   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:46.735824   75402 cri.go:89] found id: ""
	I0816 18:17:46.735853   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.735864   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:46.735871   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:46.735926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:46.767122   75402 cri.go:89] found id: ""
	I0816 18:17:46.767146   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.767154   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:46.767160   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:46.767207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:46.798093   75402 cri.go:89] found id: ""
	I0816 18:17:46.798126   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.798140   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:46.798152   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:46.798167   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.832699   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:46.832725   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:46.884212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:46.884246   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:46.896896   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:46.896921   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:46.968805   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:46.968824   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:46.968838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:45.440474   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.940127   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.206534   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.206973   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.948252   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:48.448086   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.552581   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:49.565134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:49.565212   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:49.597012   75402 cri.go:89] found id: ""
	I0816 18:17:49.597042   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.597057   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:49.597067   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:49.597133   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:49.628902   75402 cri.go:89] found id: ""
	I0816 18:17:49.628935   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.628948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:49.628957   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:49.629025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:49.662668   75402 cri.go:89] found id: ""
	I0816 18:17:49.662698   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.662709   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:49.662715   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:49.662778   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:49.696354   75402 cri.go:89] found id: ""
	I0816 18:17:49.696381   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.696389   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:49.696395   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:49.696487   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:49.730801   75402 cri.go:89] found id: ""
	I0816 18:17:49.730838   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.730849   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:49.730856   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:49.730921   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:49.764474   75402 cri.go:89] found id: ""
	I0816 18:17:49.764503   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.764514   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:49.764522   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:49.764585   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:49.798577   75402 cri.go:89] found id: ""
	I0816 18:17:49.798616   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.798627   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:49.798634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:49.798703   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:49.830987   75402 cri.go:89] found id: ""
	I0816 18:17:49.831016   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.831024   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:49.831032   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:49.831043   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:49.883397   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:49.883433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:49.897208   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:49.897239   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:49.968363   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:49.968386   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:49.968398   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:50.056552   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:50.056583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:52.596191   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:52.609592   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:52.609668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:52.645775   75402 cri.go:89] found id: ""
	I0816 18:17:52.645807   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.645817   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:52.645823   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:52.645869   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:52.677817   75402 cri.go:89] found id: ""
	I0816 18:17:52.677852   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.677862   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:52.677870   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:52.677935   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:52.710618   75402 cri.go:89] found id: ""
	I0816 18:17:52.710648   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.710658   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:52.710664   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:52.710716   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:52.745830   75402 cri.go:89] found id: ""
	I0816 18:17:52.745858   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.745867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:52.745872   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:52.745929   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:52.778511   75402 cri.go:89] found id: ""
	I0816 18:17:52.778538   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.778548   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:52.778567   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:52.778632   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:52.810759   75402 cri.go:89] found id: ""
	I0816 18:17:52.810788   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.810800   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:52.810807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:52.810872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:52.843786   75402 cri.go:89] found id: ""
	I0816 18:17:52.843814   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.843824   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:52.843831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:52.843886   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:52.876886   75402 cri.go:89] found id: ""
	I0816 18:17:52.876914   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.876924   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:52.876934   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:52.876950   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:52.932519   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:52.932559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:52.946645   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:52.946671   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:53.018156   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:53.018177   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:53.018190   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:53.095562   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:53.095600   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:49.940263   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:51.940433   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.707635   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.206027   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:50.449204   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.949591   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.633820   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:55.646170   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:55.646238   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:55.678147   75402 cri.go:89] found id: ""
	I0816 18:17:55.678181   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.678194   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:55.678202   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:55.678264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:55.710910   75402 cri.go:89] found id: ""
	I0816 18:17:55.710938   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.710948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:55.710956   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:55.711012   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:55.744822   75402 cri.go:89] found id: ""
	I0816 18:17:55.744853   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.744863   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:55.744870   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:55.744931   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:55.791677   75402 cri.go:89] found id: ""
	I0816 18:17:55.791708   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.791719   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:55.791727   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:55.791788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:55.826448   75402 cri.go:89] found id: ""
	I0816 18:17:55.826481   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.826492   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:55.826500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:55.826564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:55.861178   75402 cri.go:89] found id: ""
	I0816 18:17:55.861210   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.861219   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:55.861225   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:55.861280   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:55.898073   75402 cri.go:89] found id: ""
	I0816 18:17:55.898099   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.898110   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:55.898117   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:55.898184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:55.931446   75402 cri.go:89] found id: ""
	I0816 18:17:55.931478   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.931487   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:55.931498   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:55.931514   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:55.999910   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:55.999931   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:55.999943   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:56.077240   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:56.077312   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:56.115479   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:56.115506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:56.166954   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:56.166989   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:54.440166   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.939865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:54.206368   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.206710   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.207053   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.448566   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:57.948891   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.680571   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:58.692824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:58.692890   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:58.729761   75402 cri.go:89] found id: ""
	I0816 18:17:58.729786   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.729794   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:58.729799   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:58.729857   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:58.764943   75402 cri.go:89] found id: ""
	I0816 18:17:58.765082   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.765113   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:58.765124   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:58.765179   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:58.801314   75402 cri.go:89] found id: ""
	I0816 18:17:58.801345   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.801357   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:58.801365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:58.801429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:58.833936   75402 cri.go:89] found id: ""
	I0816 18:17:58.833973   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.833982   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:58.833988   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:58.834046   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:58.870108   75402 cri.go:89] found id: ""
	I0816 18:17:58.870137   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.870148   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:58.870155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:58.870219   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:58.904157   75402 cri.go:89] found id: ""
	I0816 18:17:58.904184   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.904194   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:58.904201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:58.904264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:58.937862   75402 cri.go:89] found id: ""
	I0816 18:17:58.937891   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.937901   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:58.937909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:58.937972   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:58.972465   75402 cri.go:89] found id: ""
	I0816 18:17:58.972495   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.972506   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:58.972517   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:58.972532   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:59.047197   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:59.047223   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:59.047238   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:59.126634   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:59.126668   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:59.165528   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:59.165562   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:59.214294   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:59.214433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:01.729662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:01.742582   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:01.742642   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:01.776148   75402 cri.go:89] found id: ""
	I0816 18:18:01.776180   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.776188   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:01.776197   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:01.776243   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:01.809186   75402 cri.go:89] found id: ""
	I0816 18:18:01.809218   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.809229   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:01.809237   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:01.809307   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:01.842379   75402 cri.go:89] found id: ""
	I0816 18:18:01.842406   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.842417   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:01.842425   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:01.842490   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:01.874706   75402 cri.go:89] found id: ""
	I0816 18:18:01.874739   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.874747   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:01.874753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:01.874813   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:01.915567   75402 cri.go:89] found id: ""
	I0816 18:18:01.915596   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.915607   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:01.915615   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:01.915675   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:01.951527   75402 cri.go:89] found id: ""
	I0816 18:18:01.951559   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.951569   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:01.951576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:01.951638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:01.983822   75402 cri.go:89] found id: ""
	I0816 18:18:01.983848   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.983856   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:01.983861   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:01.983909   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:02.018976   75402 cri.go:89] found id: ""
	I0816 18:18:02.019003   75402 logs.go:276] 0 containers: []
	W0816 18:18:02.019012   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:02.019019   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:02.019033   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:02.071096   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:02.071131   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:02.085163   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:02.085189   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:02.154771   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:02.154789   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:02.154800   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:02.242068   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:02.242105   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:58.941456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:01.440404   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.208085   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.705334   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.447843   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.448334   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.790311   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:04.803215   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:04.803298   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:04.835834   75402 cri.go:89] found id: ""
	I0816 18:18:04.835868   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.835879   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:04.835886   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:04.835951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:04.870000   75402 cri.go:89] found id: ""
	I0816 18:18:04.870032   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.870042   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:04.870049   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:04.870111   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:04.906624   75402 cri.go:89] found id: ""
	I0816 18:18:04.906653   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.906663   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:04.906670   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:04.906730   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:04.940115   75402 cri.go:89] found id: ""
	I0816 18:18:04.940139   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.940148   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:04.940155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:04.940213   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:04.974461   75402 cri.go:89] found id: ""
	I0816 18:18:04.974493   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.974503   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:04.974510   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:04.974571   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:05.006593   75402 cri.go:89] found id: ""
	I0816 18:18:05.006618   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.006628   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:05.006635   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:05.006691   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:05.040041   75402 cri.go:89] found id: ""
	I0816 18:18:05.040066   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.040082   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:05.040089   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:05.040144   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:05.072968   75402 cri.go:89] found id: ""
	I0816 18:18:05.072996   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.073005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:05.073014   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:05.073025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:05.124510   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:05.124543   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:05.145566   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:05.145592   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:05.221874   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:05.221898   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:05.221914   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:05.297283   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:05.297316   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:07.837564   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:07.850372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:07.850441   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:07.882879   75402 cri.go:89] found id: ""
	I0816 18:18:07.882906   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.882915   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:07.882920   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:07.882978   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:07.916983   75402 cri.go:89] found id: ""
	I0816 18:18:07.917011   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.917019   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:07.917024   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:07.917075   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:07.953864   75402 cri.go:89] found id: ""
	I0816 18:18:07.953886   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.953896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:07.953903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:07.953951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:07.994375   75402 cri.go:89] found id: ""
	I0816 18:18:07.994399   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.994408   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:07.994414   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:07.994472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:08.029137   75402 cri.go:89] found id: ""
	I0816 18:18:08.029170   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.029182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:08.029189   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:08.029253   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:08.062331   75402 cri.go:89] found id: ""
	I0816 18:18:08.062358   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.062367   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:08.062373   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:08.062430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:08.097021   75402 cri.go:89] found id: ""
	I0816 18:18:08.097044   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.097051   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:08.097056   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:08.097112   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:03.940724   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.441847   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.706298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.707011   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.948066   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.948125   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.948992   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.131147   75402 cri.go:89] found id: ""
	I0816 18:18:08.131174   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.131184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:08.131192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:08.131203   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.182334   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:08.182373   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:08.195459   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:08.195485   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:08.260333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:08.260351   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:08.260363   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:08.344466   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:08.344506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:10.881640   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:10.896400   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:10.896482   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:10.934034   75402 cri.go:89] found id: ""
	I0816 18:18:10.934068   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.934076   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:10.934081   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:10.934130   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:10.966697   75402 cri.go:89] found id: ""
	I0816 18:18:10.966724   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.966733   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:10.966741   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:10.966807   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:11.000540   75402 cri.go:89] found id: ""
	I0816 18:18:11.000568   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.000579   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:11.000587   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:11.000665   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:11.034322   75402 cri.go:89] found id: ""
	I0816 18:18:11.034346   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.034354   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:11.034360   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:11.034407   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:11.067081   75402 cri.go:89] found id: ""
	I0816 18:18:11.067108   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.067116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:11.067122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:11.067170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:11.099726   75402 cri.go:89] found id: ""
	I0816 18:18:11.099753   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.099763   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:11.099770   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:11.099834   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:11.133187   75402 cri.go:89] found id: ""
	I0816 18:18:11.133216   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.133226   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:11.133235   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:11.133315   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:11.167121   75402 cri.go:89] found id: ""
	I0816 18:18:11.167157   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.167166   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:11.167177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:11.167194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:11.181396   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:11.181424   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:11.248286   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:11.248313   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:11.248325   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:11.328546   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:11.328583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:11.365534   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:11.365576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.939686   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.941097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.440001   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:09.207018   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:11.207677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.706818   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.949461   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.448057   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.919889   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:13.935097   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:13.935178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:13.973196   75402 cri.go:89] found id: ""
	I0816 18:18:13.973225   75402 logs.go:276] 0 containers: []
	W0816 18:18:13.973236   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:13.973244   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:13.973328   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:14.011913   75402 cri.go:89] found id: ""
	I0816 18:18:14.011936   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.011944   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:14.011950   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:14.012013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:14.048418   75402 cri.go:89] found id: ""
	I0816 18:18:14.048447   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.048459   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:14.048466   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:14.048515   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:14.082462   75402 cri.go:89] found id: ""
	I0816 18:18:14.082496   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.082506   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:14.082514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:14.082576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:14.114958   75402 cri.go:89] found id: ""
	I0816 18:18:14.114986   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.114996   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:14.115005   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:14.115067   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:14.154829   75402 cri.go:89] found id: ""
	I0816 18:18:14.154865   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.154878   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:14.154888   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:14.154957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:14.190012   75402 cri.go:89] found id: ""
	I0816 18:18:14.190045   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.190053   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:14.190058   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:14.190108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:14.223314   75402 cri.go:89] found id: ""
	I0816 18:18:14.223341   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.223350   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:14.223360   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:14.223381   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:14.274995   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:14.275035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:14.288518   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:14.288564   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:14.365668   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:14.365691   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:14.365705   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:14.445828   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:14.445866   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:16.981802   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:16.994729   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:16.994794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:17.029790   75402 cri.go:89] found id: ""
	I0816 18:18:17.029821   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.029839   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:17.029848   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:17.029912   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:17.063194   75402 cri.go:89] found id: ""
	I0816 18:18:17.063223   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.063233   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:17.063240   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:17.063293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:17.097808   75402 cri.go:89] found id: ""
	I0816 18:18:17.097831   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.097839   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:17.097844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:17.097900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:17.132646   75402 cri.go:89] found id: ""
	I0816 18:18:17.132682   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.132691   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:17.132697   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:17.132751   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:17.164285   75402 cri.go:89] found id: ""
	I0816 18:18:17.164316   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.164328   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:17.164335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:17.164391   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:17.195642   75402 cri.go:89] found id: ""
	I0816 18:18:17.195672   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.195683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:17.195691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:17.195754   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:17.228005   75402 cri.go:89] found id: ""
	I0816 18:18:17.228033   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.228041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:17.228047   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:17.228107   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:17.279195   75402 cri.go:89] found id: ""
	I0816 18:18:17.279229   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.279241   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:17.279253   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:17.279270   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:17.360084   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:17.360125   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:17.405184   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:17.405210   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:17.457453   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:17.457483   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:17.471472   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:17.471502   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:17.536478   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:15.939660   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.940456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:16.207019   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:18.706191   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:15.450419   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.948912   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.036644   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:20.050169   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:20.050244   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.087943   75402 cri.go:89] found id: ""
	I0816 18:18:20.087971   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.087981   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:20.087988   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:20.088051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:20.119908   75402 cri.go:89] found id: ""
	I0816 18:18:20.119931   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.119940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:20.119945   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:20.120013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:20.152115   75402 cri.go:89] found id: ""
	I0816 18:18:20.152146   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.152156   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:20.152162   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:20.152209   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:20.189464   75402 cri.go:89] found id: ""
	I0816 18:18:20.189488   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.189495   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:20.189500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:20.189550   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:20.224779   75402 cri.go:89] found id: ""
	I0816 18:18:20.224807   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.224817   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:20.224824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:20.224888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:20.257021   75402 cri.go:89] found id: ""
	I0816 18:18:20.257048   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.257059   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:20.257067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:20.257121   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:20.290991   75402 cri.go:89] found id: ""
	I0816 18:18:20.291023   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.291032   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:20.291039   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:20.291099   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:20.323674   75402 cri.go:89] found id: ""
	I0816 18:18:20.323704   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.323715   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:20.323726   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:20.323742   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:20.373411   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:20.373447   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:20.386954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:20.386981   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:20.464366   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.464384   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:20.464403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:20.541836   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:20.541881   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:23.085071   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:23.100460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:23.100524   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.440656   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.942713   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.706771   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.207824   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.448676   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.141239   75402 cri.go:89] found id: ""
	I0816 18:18:23.141269   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.141280   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:23.141287   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:23.141354   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:23.172914   75402 cri.go:89] found id: ""
	I0816 18:18:23.172941   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.172950   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:23.172958   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:23.173015   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:23.205593   75402 cri.go:89] found id: ""
	I0816 18:18:23.205621   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.205632   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:23.205640   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:23.205706   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:23.239358   75402 cri.go:89] found id: ""
	I0816 18:18:23.239383   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.239392   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:23.239401   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:23.239463   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:23.271798   75402 cri.go:89] found id: ""
	I0816 18:18:23.271828   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.271838   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:23.271844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:23.271911   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:23.305287   75402 cri.go:89] found id: ""
	I0816 18:18:23.305316   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.305327   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:23.305335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:23.305397   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:23.344041   75402 cri.go:89] found id: ""
	I0816 18:18:23.344067   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.344075   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:23.344080   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:23.344134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:23.376540   75402 cri.go:89] found id: ""
	I0816 18:18:23.376571   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.376583   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:23.376601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:23.376616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:23.428265   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:23.428301   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:23.441377   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:23.441404   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:23.509219   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:23.509243   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:23.509259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:23.589151   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:23.589186   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.126176   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:26.140228   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:26.140292   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:26.176768   75402 cri.go:89] found id: ""
	I0816 18:18:26.176807   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.176820   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:26.176829   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:26.176887   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:26.212357   75402 cri.go:89] found id: ""
	I0816 18:18:26.212383   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.212390   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:26.212396   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:26.212457   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:26.245256   75402 cri.go:89] found id: ""
	I0816 18:18:26.245290   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.245302   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:26.245309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:26.245370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:26.277525   75402 cri.go:89] found id: ""
	I0816 18:18:26.277561   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.277569   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:26.277575   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:26.277627   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:26.310928   75402 cri.go:89] found id: ""
	I0816 18:18:26.310956   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.310967   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:26.310976   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:26.311052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:26.344595   75402 cri.go:89] found id: ""
	I0816 18:18:26.344647   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.344661   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:26.344669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:26.344741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:26.377776   75402 cri.go:89] found id: ""
	I0816 18:18:26.377805   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.377814   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:26.377820   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:26.377872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:26.411139   75402 cri.go:89] found id: ""
	I0816 18:18:26.411167   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.411179   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:26.411190   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:26.411204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:26.493802   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:26.493838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.529542   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:26.529576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:26.583544   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:26.583588   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:26.596429   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:26.596459   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:26.667858   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:25.441062   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.940609   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.706109   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:28.206196   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.448352   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.947950   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:29.168766   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:29.182032   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:29.182103   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:29.220213   75402 cri.go:89] found id: ""
	I0816 18:18:29.220239   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.220247   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:29.220253   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:29.220300   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:29.257820   75402 cri.go:89] found id: ""
	I0816 18:18:29.257850   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.257861   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:29.257867   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:29.257933   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:29.290450   75402 cri.go:89] found id: ""
	I0816 18:18:29.290473   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.290480   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:29.290485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:29.290546   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:29.328032   75402 cri.go:89] found id: ""
	I0816 18:18:29.328061   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.328070   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:29.328076   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:29.328135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:29.362104   75402 cri.go:89] found id: ""
	I0816 18:18:29.362132   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.362141   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:29.362149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:29.362218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:29.395258   75402 cri.go:89] found id: ""
	I0816 18:18:29.395290   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.395301   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:29.395309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:29.395375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:29.426617   75402 cri.go:89] found id: ""
	I0816 18:18:29.426646   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.426656   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:29.426663   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:29.426725   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:29.462861   75402 cri.go:89] found id: ""
	I0816 18:18:29.462890   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.462901   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:29.462912   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:29.462928   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:29.514882   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:29.514915   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:29.528101   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:29.528128   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:29.598983   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.599005   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:29.599020   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:29.684955   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:29.684991   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.230155   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:32.244158   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:32.244226   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:32.281993   75402 cri.go:89] found id: ""
	I0816 18:18:32.282020   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.282031   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:32.282037   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:32.282100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:32.316870   75402 cri.go:89] found id: ""
	I0816 18:18:32.316896   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.316906   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:32.316914   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:32.316976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:32.352597   75402 cri.go:89] found id: ""
	I0816 18:18:32.352637   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.352649   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:32.352656   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:32.352722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:32.387520   75402 cri.go:89] found id: ""
	I0816 18:18:32.387564   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.387576   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:32.387584   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:32.387638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:32.421499   75402 cri.go:89] found id: ""
	I0816 18:18:32.421526   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.421537   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:32.421544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:32.421603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:32.460048   75402 cri.go:89] found id: ""
	I0816 18:18:32.460075   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.460086   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:32.460093   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:32.460151   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:32.498148   75402 cri.go:89] found id: ""
	I0816 18:18:32.498176   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.498184   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:32.498190   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:32.498248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:32.530683   75402 cri.go:89] found id: ""
	I0816 18:18:32.530717   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.530730   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:32.530741   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:32.530762   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:32.614776   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:32.614820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.655628   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:32.655667   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:32.722763   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:32.722807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:32.739817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:32.739847   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:32.819297   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:30.440684   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.441210   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.206433   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.707436   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.448781   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.457660   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:35.320173   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:35.332427   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:35.332503   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:35.366316   75402 cri.go:89] found id: ""
	I0816 18:18:35.366346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.366357   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:35.366365   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:35.366433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:35.399308   75402 cri.go:89] found id: ""
	I0816 18:18:35.399346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.399357   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:35.399367   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:35.399434   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:35.434926   75402 cri.go:89] found id: ""
	I0816 18:18:35.434958   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.434971   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:35.434980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:35.435042   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:35.473222   75402 cri.go:89] found id: ""
	I0816 18:18:35.473247   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.473258   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:35.473266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:35.473343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:35.505484   75402 cri.go:89] found id: ""
	I0816 18:18:35.505521   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.505533   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:35.505540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:35.505608   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:35.540532   75402 cri.go:89] found id: ""
	I0816 18:18:35.540573   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.540584   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:35.540590   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:35.540663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:35.574205   75402 cri.go:89] found id: ""
	I0816 18:18:35.574235   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.574245   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:35.574252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:35.574343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:35.614707   75402 cri.go:89] found id: ""
	I0816 18:18:35.614732   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.614739   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:35.614747   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:35.614759   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:35.690830   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:35.690861   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:35.726601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:35.726627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:35.774706   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:35.774736   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:35.787557   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:35.787616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:35.857474   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
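The repeated probe above is minikube's log-gathering fallback while the old control plane is down: it asks CRI-O for each control-plane container by name, finds none, and then collects journal, dmesg and "describe nodes" output (the last of which fails because nothing is listening on localhost:8443). The same commands from the log can be replayed by hand from inside the node, e.g. via minikube ssh; this is only a sketch of that manual check, not part of the test flow:

    # ask CRI-O for control-plane containers by name (the log finds none)
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # recent CRI-O and kubelet journal entries, as gathered above
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # kernel warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # fails with "connection refused" until an API server is back on localhost:8443
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig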
	I0816 18:18:34.940337   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.440507   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:34.701151   74828 pod_ready.go:82] duration metric: took 4m0.000965442s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:34.701178   74828 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:34.701196   74828 pod_ready.go:39] duration metric: took 4m13.502588966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:34.701228   74828 kubeadm.go:597] duration metric: took 4m21.306103533s to restartPrimaryControlPlane
	W0816 18:18:34.701293   74828 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:34.701330   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
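The 4m0s timeout above is minikube polling the metrics-server pod's Ready condition before giving up and resetting the cluster. The equivalent check can be made directly with kubectl, assuming a kubeconfig pointing at this profile's API server (a sketch; the pod name is taken from the log lines above):

    # inspect the pod the wait loop was polling
    kubectl -n kube-system get pod metrics-server-6867b74b74-rxtwg -o wide
    # or wait on the Ready condition explicitly, mirroring minikube's 4m timeout
    kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-rxtwg --timeout=4m0s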
	I0816 18:18:34.948583   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.447544   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:39.448942   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:38.358057   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:38.371128   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:38.371189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:38.404812   75402 cri.go:89] found id: ""
	I0816 18:18:38.404844   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.404855   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:38.404864   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:38.404926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:38.437922   75402 cri.go:89] found id: ""
	I0816 18:18:38.437950   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.437960   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:38.437967   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:38.438023   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:38.471474   75402 cri.go:89] found id: ""
	I0816 18:18:38.471509   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.471519   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:38.471525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:38.471582   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:38.510132   75402 cri.go:89] found id: ""
	I0816 18:18:38.510158   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.510168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:38.510184   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:38.510246   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:38.542212   75402 cri.go:89] found id: ""
	I0816 18:18:38.542251   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.542262   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:38.542269   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:38.542341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:38.579037   75402 cri.go:89] found id: ""
	I0816 18:18:38.579068   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.579076   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:38.579082   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:38.579129   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:38.619219   75402 cri.go:89] found id: ""
	I0816 18:18:38.619252   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.619263   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:38.619272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:38.619335   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:38.655124   75402 cri.go:89] found id: ""
	I0816 18:18:38.655149   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.655169   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:38.655180   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:38.655194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:38.737857   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:38.737894   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:38.779777   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:38.779806   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:38.831556   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:38.831590   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:38.844496   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:38.844523   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:38.914543   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.415612   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:41.428187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:41.428251   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:41.462932   75402 cri.go:89] found id: ""
	I0816 18:18:41.462964   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.462975   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:41.462983   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:41.463043   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:41.497712   75402 cri.go:89] found id: ""
	I0816 18:18:41.497739   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.497748   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:41.497754   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:41.497804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:41.528430   75402 cri.go:89] found id: ""
	I0816 18:18:41.528455   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.528463   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:41.528468   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:41.528527   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:41.560048   75402 cri.go:89] found id: ""
	I0816 18:18:41.560071   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.560081   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:41.560088   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:41.560142   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:41.592536   75402 cri.go:89] found id: ""
	I0816 18:18:41.592566   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.592577   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:41.592585   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:41.592663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:41.626850   75402 cri.go:89] found id: ""
	I0816 18:18:41.626884   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.626894   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:41.626902   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:41.626965   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:41.660452   75402 cri.go:89] found id: ""
	I0816 18:18:41.660478   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.660486   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:41.660491   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:41.660542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:41.695990   75402 cri.go:89] found id: ""
	I0816 18:18:41.696012   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.696020   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:41.696028   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:41.696039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:41.733107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:41.733134   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:41.782812   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:41.782843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:41.795954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:41.795984   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:41.867473   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.867526   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:41.867545   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:39.442037   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.940088   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.948682   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:43.942215   75006 pod_ready.go:82] duration metric: took 4m0.000164284s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:43.942239   75006 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:43.942255   75006 pod_ready.go:39] duration metric: took 4m12.163955241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:43.942279   75006 kubeadm.go:597] duration metric: took 4m21.898271101s to restartPrimaryControlPlane
	W0816 18:18:43.942326   75006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:43.942352   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:44.450340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:44.463299   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:44.463361   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:44.495068   75402 cri.go:89] found id: ""
	I0816 18:18:44.495098   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.495108   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:44.495116   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:44.495221   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:44.529615   75402 cri.go:89] found id: ""
	I0816 18:18:44.529638   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.529646   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:44.529651   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:44.529701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:44.565275   75402 cri.go:89] found id: ""
	I0816 18:18:44.565298   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.565306   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:44.565321   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:44.565384   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:44.598554   75402 cri.go:89] found id: ""
	I0816 18:18:44.598590   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.598601   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:44.598609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:44.598673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:44.631389   75402 cri.go:89] found id: ""
	I0816 18:18:44.631422   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.631436   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:44.631446   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:44.631519   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:44.663986   75402 cri.go:89] found id: ""
	I0816 18:18:44.664013   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.664023   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:44.664031   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:44.664095   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:44.700238   75402 cri.go:89] found id: ""
	I0816 18:18:44.700263   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.700272   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:44.700277   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:44.700330   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:44.732737   75402 cri.go:89] found id: ""
	I0816 18:18:44.732766   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.732779   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:44.732790   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:44.732807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.806427   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:44.806462   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:44.842965   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:44.842994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:44.895745   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:44.895781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:44.909850   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:44.909885   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:44.979315   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:47.479563   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:47.491876   75402 kubeadm.go:597] duration metric: took 4m4.431091965s to restartPrimaryControlPlane
	W0816 18:18:47.491939   75402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:47.491962   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:43.941047   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:46.440592   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:48.441208   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:51.168302   75402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676317513s)
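Once the reset finishes, minikube verifies that kubelet is no longer active and installs the freshly rendered kubeadm config before re-running init. The same sequence, copied from the commands in the log (a manual sketch, not minikube's own code path):

    # tear down the previous control-plane state (completed in ~3.7s above)
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # check whether kubelet is still active, as minikube does right after the reset
    sudo systemctl is-active --quiet service kubelet
    # install the regenerated kubeadm config used by the next init
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml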
	I0816 18:18:51.168387   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:18:51.182492   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:18:51.192403   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:18:51.202058   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:18:51.202075   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:18:51.202115   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:18:51.210661   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:18:51.210721   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:18:51.219979   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:18:51.228422   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:18:51.228488   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:18:51.237159   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.245555   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:18:51.245622   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.253986   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:18:51.261885   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:18:51.261927   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
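The grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm init can regenerate it (here all four files are already missing). Condensed into a loop, the same commands look like this (a sketch, not minikube's own implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already targets the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done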
	I0816 18:18:51.270479   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:18:51.335784   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:18:51.335883   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:18:51.482910   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:18:51.483069   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:18:51.483228   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:18:51.652730   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:18:51.655077   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:18:51.655185   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:18:51.655304   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:18:51.655425   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:18:51.655521   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:18:51.657408   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:18:51.657485   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:18:51.657561   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:18:51.657645   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:18:51.657748   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:18:51.657854   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:18:51.657911   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:18:51.657984   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:18:51.720786   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:18:51.991165   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:18:52.140983   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:18:52.453361   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:18:52.467210   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:18:52.469222   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:18:52.469338   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:18:52.590938   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:18:52.592875   75402 out.go:235]   - Booting up control plane ...
	I0816 18:18:52.592987   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:18:52.602597   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:18:52.603616   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:18:52.604417   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:18:52.606669   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
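While kubeadm waits for the kubelet to start the static pods, their progress can be followed on the node with the same crictl queries used elsewhere in this log; a manual sketch, not part of the test itself:

    # static pod manifests written by the steps above
    ls /etc/kubernetes/manifests/
    # watch the control-plane containers appear as the kubelet creates them
    sudo crictl ps -a --name=kube-apiserver
    sudo crictl ps -a --name=etcd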
	I0816 18:18:50.939639   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:52.940202   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:54.940917   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:57.439382   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:59.443139   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:01.940496   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:00.803654   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.102297191s)
	I0816 18:19:00.803740   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:00.818126   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:00.827602   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:00.836389   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:00.836410   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:00.836455   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:19:00.844830   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:00.844880   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:00.853736   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:19:00.862795   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:00.862855   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:00.872056   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.880410   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:00.880461   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.889000   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:19:00.897508   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:00.897568   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:00.906256   74828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:00.953336   74828 kubeadm.go:310] W0816 18:19:00.929461    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:00.955337   74828 kubeadm.go:310] W0816 18:19:00.931382    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:01.068247   74828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:03.940545   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:06.439727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:08.440027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:09.225829   74828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:09.225908   74828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:09.226014   74828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:09.226126   74828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:09.226242   74828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:09.226329   74828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:09.228065   74828 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:09.228133   74828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:09.228183   74828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:09.228252   74828 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:09.228315   74828 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:09.228403   74828 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:09.228489   74828 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:09.228584   74828 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:09.228686   74828 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:09.228787   74828 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:09.228864   74828 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:09.228903   74828 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:09.228983   74828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:09.229052   74828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:09.229147   74828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:09.229234   74828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:09.229332   74828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:09.229410   74828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:09.229532   74828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:09.229607   74828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.230874   74828 out.go:235]   - Booting up control plane ...
	I0816 18:19:09.230948   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:09.231032   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:09.231090   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:09.231202   74828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:09.231321   74828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:09.231381   74828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:09.231572   74828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:09.231662   74828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:09.231711   74828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.32263ms
	I0816 18:19:09.231774   74828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:09.231824   74828 kubeadm.go:310] [api-check] The API server is healthy after 5.002367118s
	I0816 18:19:09.231923   74828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:09.232091   74828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:09.232166   74828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:09.232419   74828 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-864476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:09.232497   74828 kubeadm.go:310] [bootstrap-token] Using token: 6m1jus.xr9uhx26t28q092p
	I0816 18:19:09.233962   74828 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:09.234068   74828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:09.234164   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:09.234315   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:09.234425   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:09.234522   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:09.234615   74828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:09.234775   74828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:09.234830   74828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:09.234892   74828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:09.234901   74828 kubeadm.go:310] 
	I0816 18:19:09.234971   74828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:09.234980   74828 kubeadm.go:310] 
	I0816 18:19:09.235067   74828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:09.235076   74828 kubeadm.go:310] 
	I0816 18:19:09.235115   74828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:09.235194   74828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:09.235271   74828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:09.235280   74828 kubeadm.go:310] 
	I0816 18:19:09.235367   74828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:09.235376   74828 kubeadm.go:310] 
	I0816 18:19:09.235448   74828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:09.235459   74828 kubeadm.go:310] 
	I0816 18:19:09.235533   74828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:09.235607   74828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:09.235677   74828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:09.235683   74828 kubeadm.go:310] 
	I0816 18:19:09.235795   74828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:09.235907   74828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:09.235916   74828 kubeadm.go:310] 
	I0816 18:19:09.235986   74828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236080   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:09.236099   74828 kubeadm.go:310] 	--control-plane 
	I0816 18:19:09.236105   74828 kubeadm.go:310] 
	I0816 18:19:09.236177   74828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:09.236185   74828 kubeadm.go:310] 
	I0816 18:19:09.236268   74828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236403   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:09.236416   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:19:09.236422   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:09.237971   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:10.069497   75006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.127122656s)
	I0816 18:19:10.069585   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:10.085322   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:10.098736   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:10.108163   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:10.108183   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:10.108224   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:19:10.117330   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:10.117382   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:10.127090   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:19:10.135574   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:10.135648   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:10.146127   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.154474   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:10.154533   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.163245   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:19:10.171315   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:10.171375   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:10.181088   75006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:10.225495   75006 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:10.225571   75006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:10.327332   75006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:10.327442   75006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:10.327586   75006 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:10.335739   75006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:10.337610   75006 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:10.337730   75006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:10.337818   75006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:10.337935   75006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:10.338054   75006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:10.338174   75006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:10.338254   75006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:10.338359   75006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:10.338452   75006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:10.338562   75006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:10.338668   75006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:10.338718   75006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:10.338796   75006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:10.437447   75006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:10.868191   75006 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:10.961497   75006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:11.363158   75006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:11.963929   75006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:11.964410   75006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:11.967675   75006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.239250   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:09.250270   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
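The 496-byte file copied here is minikube's bridge CNI configuration. Its exact contents are not printed in the log; a minimal bridge conflist of the kind the CNI plugins accept looks roughly like the following (illustrative values only, written via a shell heredoc):

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF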
	I0816 18:19:09.267205   74828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:09.267346   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.267366   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-864476 minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=no-preload-864476 minikube.k8s.io/primary=true
	I0816 18:19:09.282111   74828 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:09.471160   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.971453   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.471576   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.971748   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.471954   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.971371   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.471626   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.972021   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.472254   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.588350   74828 kubeadm.go:1113] duration metric: took 4.321062687s to wait for elevateKubeSystemPrivileges
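The 4.3s "elevateKubeSystemPrivileges" step above is minikube granting cluster-admin to the kube-system default service account and polling until that account exists in the freshly initialized cluster. The same two commands from the log, run by hand as a sketch:

    # grant cluster-admin to kube-system:default, exactly as minikube does above
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # poll until the controller manager has created the default service account
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do
      sleep 0.5
    done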
	I0816 18:19:13.588392   74828 kubeadm.go:394] duration metric: took 5m0.245036951s to StartCluster
	I0816 18:19:13.588413   74828 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.588500   74828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:13.591118   74828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.591418   74828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:13.591683   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:13.591744   74828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:13.591809   74828 addons.go:69] Setting storage-provisioner=true in profile "no-preload-864476"
	I0816 18:19:13.591839   74828 addons.go:234] Setting addon storage-provisioner=true in "no-preload-864476"
	W0816 18:19:13.591851   74828 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:19:13.591882   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592025   74828 addons.go:69] Setting default-storageclass=true in profile "no-preload-864476"
	I0816 18:19:13.592070   74828 addons.go:69] Setting metrics-server=true in profile "no-preload-864476"
	I0816 18:19:13.592135   74828 addons.go:234] Setting addon metrics-server=true in "no-preload-864476"
	W0816 18:19:13.592150   74828 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:13.592073   74828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864476"
	I0816 18:19:13.592272   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592206   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592326   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592654   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592677   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592731   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592753   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592790   74828 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:13.594236   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:13.613019   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0816 18:19:13.613061   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0816 18:19:13.613087   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0816 18:19:13.613498   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613552   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613708   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.614094   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614113   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614198   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614222   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614403   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614420   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614478   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614675   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614728   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614856   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.615039   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615068   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.615401   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615442   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.619787   74828 addons.go:234] Setting addon default-storageclass=true in "no-preload-864476"
	W0816 18:19:13.619815   74828 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:13.619848   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.620274   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.620438   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.642013   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0816 18:19:13.642196   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0816 18:19:13.642654   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643201   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.643227   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.643304   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643888   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644065   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.644086   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.644537   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644548   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.644591   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.645002   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.646881   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0816 18:19:13.647127   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.647406   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.648126   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.648156   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.648725   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.648935   74828 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:13.649121   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.649823   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:13.649840   74828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:13.649861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.651524   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.652917   74828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:10.441027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:12.939870   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:13.653916   74828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.653933   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:13.653952   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.654035   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654463   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.654482   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654665   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.654883   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.655044   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.655247   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.657315   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657699   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.657783   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657974   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.658125   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.658247   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.658362   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.670111   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0816 18:19:13.670711   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.671220   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.671239   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.671585   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.671778   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.673274   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.673480   74828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:13.673493   74828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:13.673511   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.677160   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677542   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.677564   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677854   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.678049   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.678170   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.678263   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:11.970291   75006 out.go:235]   - Booting up control plane ...
	I0816 18:19:11.970385   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:11.970516   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:11.970617   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:11.988374   75006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:11.997980   75006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:11.998045   75006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:12.132297   75006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:12.132447   75006 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:13.135489   75006 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003222114s
	I0816 18:19:13.135584   75006 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:13.840111   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:13.903130   74828 node_ready.go:35] waiting up to 6m0s for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915130   74828 node_ready.go:49] node "no-preload-864476" has status "Ready":"True"
	I0816 18:19:13.915163   74828 node_ready.go:38] duration metric: took 12.001127ms for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915174   74828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:13.926756   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:13.944598   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.971002   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:13.971036   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:13.998897   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:14.015731   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:14.015754   74828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:14.080186   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:14.080212   74828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:14.187279   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:15.075984   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077053329s)
	I0816 18:19:15.076058   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076071   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076364   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131733705s)
	I0816 18:19:15.076478   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076495   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076405   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076567   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076591   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076600   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076436   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.076786   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076838   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076859   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076879   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076969   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076987   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.077443   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.077517   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.077535   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.164872   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.164903   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.165218   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.165238   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373294   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1859614s)
	I0816 18:19:15.373399   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373417   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.373716   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.373769   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.373804   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373825   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373837   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.374124   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.374130   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.374181   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.374192   74828 addons.go:475] Verifying addon metrics-server=true in "no-preload-864476"
	I0816 18:19:15.375801   74828 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:17.638005   75006 kubeadm.go:310] [api-check] The API server is healthy after 4.502130995s
	I0816 18:19:17.658334   75006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:17.678882   75006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:17.709612   75006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:17.709881   75006 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-256678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:17.724755   75006 kubeadm.go:310] [bootstrap-token] Using token: cdypho.k0vxtmnp4c93945s
	I0816 18:19:14.941895   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.440923   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:15.377611   74828 addons.go:510] duration metric: took 1.785861834s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:15.934515   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:18.435321   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.726222   75006 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:17.726361   75006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:17.733325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:17.740707   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:17.747325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:17.751554   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:17.761084   75006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:18.044607   75006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:18.485134   75006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:19.044481   75006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:19.045968   75006 kubeadm.go:310] 
	I0816 18:19:19.046038   75006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:19.046069   75006 kubeadm.go:310] 
	I0816 18:19:19.046185   75006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:19.046198   75006 kubeadm.go:310] 
	I0816 18:19:19.046229   75006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:19.046298   75006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:19.046343   75006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:19.046349   75006 kubeadm.go:310] 
	I0816 18:19:19.046396   75006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:19.046413   75006 kubeadm.go:310] 
	I0816 18:19:19.046504   75006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:19.046529   75006 kubeadm.go:310] 
	I0816 18:19:19.046614   75006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:19.046718   75006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:19.046813   75006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:19.046828   75006 kubeadm.go:310] 
	I0816 18:19:19.046941   75006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:19.047047   75006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:19.047056   75006 kubeadm.go:310] 
	I0816 18:19:19.047153   75006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047304   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:19.047346   75006 kubeadm.go:310] 	--control-plane 
	I0816 18:19:19.047358   75006 kubeadm.go:310] 
	I0816 18:19:19.047470   75006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:19.047480   75006 kubeadm.go:310] 
	I0816 18:19:19.047596   75006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047740   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:19.048871   75006 kubeadm.go:310] W0816 18:19:10.202021    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049167   75006 kubeadm.go:310] W0816 18:19:10.202700    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049279   75006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:19.049304   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:19:19.049318   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:19.051543   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:19.052677   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:19.063536   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:19.084460   75006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:19.084540   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.084608   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-256678 minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=default-k8s-diff-port-256678 minikube.k8s.io/primary=true
	I0816 18:19:19.257760   75006 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:19.258124   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.759000   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.940737   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:22.440273   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.934243   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:23.433046   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.258798   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:20.759112   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.258598   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.758433   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.258181   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.758276   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.258184   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.758168   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.846653   75006 kubeadm.go:1113] duration metric: took 4.762173901s to wait for elevateKubeSystemPrivileges
	I0816 18:19:23.846688   75006 kubeadm.go:394] duration metric: took 5m1.846731834s to StartCluster
	I0816 18:19:23.846708   75006 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.846784   75006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:23.848375   75006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.848662   75006 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:23.848750   75006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:23.848814   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:23.848840   75006 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848858   75006 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848866   75006 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848878   75006 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-256678"
	I0816 18:19:23.848882   75006 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.848887   75006 addons.go:243] addon storage-provisioner should already be in state true
	W0816 18:19:23.848890   75006 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:23.848915   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848918   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848914   75006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-256678"
	I0816 18:19:23.849232   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849259   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849271   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849293   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849362   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849404   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.850478   75006 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:23.852034   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:23.865786   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0816 18:19:23.865939   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0816 18:19:23.866248   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866304   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866398   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0816 18:19:23.866816   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866845   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866860   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.866863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866935   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867328   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867333   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867430   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.867447   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.867742   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867871   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.867897   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.868227   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.868247   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.870993   75006 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.871020   75006 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:23.871051   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.871403   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.871433   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.885139   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0816 18:19:23.885814   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.886386   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.886403   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.886814   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.886856   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0816 18:19:23.887024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.887202   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.887542   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0816 18:19:23.887784   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.887797   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.887863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.888165   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.888372   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.888389   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.889026   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.889254   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.889268   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.889518   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.889758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.890483   75006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:23.891262   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.891838   75006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:23.891859   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:23.891877   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.892581   75006 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:23.893621   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:23.893684   75006 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:23.893882   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.894413   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.894973   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.894994   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.895161   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.895322   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.895578   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.895757   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.897167   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897666   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.897685   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897802   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.897972   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.898132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.898248   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.906377   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
	I0816 18:19:23.906708   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.907497   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.907513   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.907932   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.908240   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.909917   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.910141   75006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:23.910159   75006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:23.910177   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.912435   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912678   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.912710   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912858   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.912982   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.913066   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.913138   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:24.062487   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:24.083148   75006 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092886   75006 node_ready.go:49] node "default-k8s-diff-port-256678" has status "Ready":"True"
	I0816 18:19:24.092907   75006 node_ready.go:38] duration metric: took 9.72996ms for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092916   75006 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.099123   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.184211   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:24.197461   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:24.197491   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:24.219263   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:24.258463   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:24.258498   75006 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:24.355822   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.355902   75006 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:24.436401   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.866038   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866125   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866058   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866163   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866517   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866526   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866536   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866546   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866600   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866626   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866636   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866649   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866676   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866778   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866793   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866810   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866888   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866923   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866932   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886041   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.886065   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.886338   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.886359   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886384   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.225367   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225397   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225704   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.225720   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.225730   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225739   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225961   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.226005   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.226025   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.226043   75006 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-256678"
	I0816 18:19:25.227605   75006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:23.934167   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.934191   74828 pod_ready.go:82] duration metric: took 10.007408518s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.934200   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940226   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.940249   74828 pod_ready.go:82] duration metric: took 6.040513ms for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940260   74828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945330   74828 pod_ready.go:93] pod "etcd-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.945351   74828 pod_ready.go:82] duration metric: took 5.082362ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945361   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949772   74828 pod_ready.go:93] pod "kube-apiserver-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.949800   74828 pod_ready.go:82] duration metric: took 4.429575ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949810   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954308   74828 pod_ready.go:93] pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.954328   74828 pod_ready.go:82] duration metric: took 4.510361ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954338   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331265   74828 pod_ready.go:93] pod "kube-proxy-6g6zx" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.331306   74828 pod_ready.go:82] duration metric: took 376.9609ms for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331320   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730715   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.730740   74828 pod_ready.go:82] duration metric: took 399.412376ms for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730748   74828 pod_ready.go:39] duration metric: took 10.815561534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.730761   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:24.730820   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:24.746674   74828 api_server.go:72] duration metric: took 11.155216371s to wait for apiserver process to appear ...
	I0816 18:19:24.746697   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:24.746714   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:19:24.750801   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:19:24.751835   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:24.751864   74828 api_server.go:131] duration metric: took 5.159229ms to wait for apiserver health ...
	I0816 18:19:24.751872   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:24.935471   74828 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:24.935510   74828 system_pods.go:61] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:24.935520   74828 system_pods.go:61] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:24.935539   74828 system_pods.go:61] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:24.935548   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:24.935555   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:24.935562   74828 system_pods.go:61] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:24.935572   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:24.935584   74828 system_pods.go:61] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:24.935596   74828 system_pods.go:61] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:24.935607   74828 system_pods.go:74] duration metric: took 183.727841ms to wait for pod list to return data ...
	I0816 18:19:24.935621   74828 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:25.132713   74828 default_sa.go:45] found service account: "default"
	I0816 18:19:25.132740   74828 default_sa.go:55] duration metric: took 197.112152ms for default service account to be created ...
	I0816 18:19:25.132750   74828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:25.335012   74828 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:25.335043   74828 system_pods.go:89] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:25.335048   74828 system_pods.go:89] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:25.335052   74828 system_pods.go:89] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:25.335057   74828 system_pods.go:89] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:25.335061   74828 system_pods.go:89] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:25.335064   74828 system_pods.go:89] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:25.335068   74828 system_pods.go:89] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:25.335075   74828 system_pods.go:89] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:25.335081   74828 system_pods.go:89] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:25.335089   74828 system_pods.go:126] duration metric: took 202.33381ms to wait for k8s-apps to be running ...
	I0816 18:19:25.335098   74828 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:25.335141   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:25.349420   74828 system_svc.go:56] duration metric: took 14.310938ms WaitForService to wait for kubelet
	I0816 18:19:25.349457   74828 kubeadm.go:582] duration metric: took 11.758002576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:25.349480   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:25.532145   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:25.532175   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:25.532189   74828 node_conditions.go:105] duration metric: took 182.702662ms to run NodePressure ...
	I0816 18:19:25.532200   74828 start.go:241] waiting for startup goroutines ...
	I0816 18:19:25.532209   74828 start.go:246] waiting for cluster config update ...
	I0816 18:19:25.532222   74828 start.go:255] writing updated cluster config ...
	I0816 18:19:25.532529   74828 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:25.588070   74828 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:25.589615   74828 out.go:177] * Done! kubectl is now configured to use "no-preload-864476" cluster and "default" namespace by default
	I0816 18:19:24.440489   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:25.441683   74510 pod_ready.go:82] duration metric: took 4m0.007816418s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:19:25.441706   74510 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 18:19:25.441714   74510 pod_ready.go:39] duration metric: took 4m6.551547163s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:25.441726   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:25.441753   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:25.441805   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:25.492207   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.492235   74510 cri.go:89] found id: ""
	I0816 18:19:25.492245   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:25.492313   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.497307   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:25.497388   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:25.537185   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.537211   74510 cri.go:89] found id: ""
	I0816 18:19:25.537220   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:25.537422   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.546564   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:25.546644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:25.602794   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.602817   74510 cri.go:89] found id: ""
	I0816 18:19:25.602827   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:25.602879   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.609018   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:25.609097   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:25.657942   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:25.657970   74510 cri.go:89] found id: ""
	I0816 18:19:25.657980   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:25.658044   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.663485   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:25.663551   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:25.709526   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:25.709554   74510 cri.go:89] found id: ""
	I0816 18:19:25.709564   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:25.709612   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.715845   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:25.715898   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:25.766505   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:25.766522   74510 cri.go:89] found id: ""
	I0816 18:19:25.766529   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:25.766573   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.771051   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:25.771127   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:25.810669   74510 cri.go:89] found id: ""
	I0816 18:19:25.810699   74510 logs.go:276] 0 containers: []
	W0816 18:19:25.810711   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:25.810720   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:25.810779   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:25.851412   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.851432   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:25.851438   74510 cri.go:89] found id: ""
	I0816 18:19:25.851454   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:25.851507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.856154   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.860812   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:25.860837   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.910929   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:25.910957   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.951932   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:25.951959   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.999861   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:25.999894   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:26.036535   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:26.036559   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:26.089637   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:26.089675   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:26.157679   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:26.157714   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:26.171402   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:26.171432   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:26.209537   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:26.209564   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:26.252702   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:26.252732   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:26.303169   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:26.303203   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:26.784058   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:26.784090   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:26.904095   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:26.904137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.228674   75006 addons.go:510] duration metric: took 1.37992722s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:26.105147   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:28.107202   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.607933   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:19:32.608136   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:32.608430   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:29.459100   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:29.476158   74510 api_server.go:72] duration metric: took 4m17.827179017s to wait for apiserver process to appear ...
	I0816 18:19:29.476183   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:29.476222   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:29.476279   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:29.509739   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:29.509767   74510 cri.go:89] found id: ""
	I0816 18:19:29.509776   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:29.509836   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.516078   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:29.516150   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:29.553766   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:29.553795   74510 cri.go:89] found id: ""
	I0816 18:19:29.553805   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:29.553857   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.558145   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:29.558210   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:29.599559   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:29.599583   74510 cri.go:89] found id: ""
	I0816 18:19:29.599594   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:29.599651   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.604108   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:29.604187   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:29.641990   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:29.642009   74510 cri.go:89] found id: ""
	I0816 18:19:29.642016   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:29.642062   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.645990   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:29.646047   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:29.679480   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:29.679505   74510 cri.go:89] found id: ""
	I0816 18:19:29.679514   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:29.679571   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.683361   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:29.683425   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:29.733167   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:29.733197   74510 cri.go:89] found id: ""
	I0816 18:19:29.733208   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:29.733266   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.737449   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:29.737518   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:29.771597   74510 cri.go:89] found id: ""
	I0816 18:19:29.771628   74510 logs.go:276] 0 containers: []
	W0816 18:19:29.771639   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:29.771647   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:29.771714   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:29.812346   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:29.812375   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:29.812381   74510 cri.go:89] found id: ""
	I0816 18:19:29.812390   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:29.812447   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.817909   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.821575   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:29.821602   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:30.288789   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:30.288836   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:30.332874   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:30.332904   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:30.347128   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:30.347168   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:30.456809   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:30.456845   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:30.505332   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:30.505362   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:30.540765   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:30.540798   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.576047   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:30.576077   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:30.611956   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:30.611992   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:30.678135   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:30.678177   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:30.732409   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:30.732437   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:30.773306   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:30.773331   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:30.827732   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:30.827763   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.367134   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:19:33.371523   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:19:33.372537   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:33.372560   74510 api_server.go:131] duration metric: took 3.896368169s to wait for apiserver health ...
	I0816 18:19:33.372568   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:33.372589   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:33.372653   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:33.409551   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:33.409579   74510 cri.go:89] found id: ""
	I0816 18:19:33.409590   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:33.409648   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.413727   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:33.413802   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:33.457246   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:33.457268   74510 cri.go:89] found id: ""
	I0816 18:19:33.457277   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:33.457337   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.461490   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:33.461556   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:33.497141   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:33.497169   74510 cri.go:89] found id: ""
	I0816 18:19:33.497180   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:33.497241   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.501353   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:33.501421   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:33.537797   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:33.537816   74510 cri.go:89] found id: ""
	I0816 18:19:33.537823   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:33.537877   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.541727   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:33.541784   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:33.575882   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:33.575905   74510 cri.go:89] found id: ""
	I0816 18:19:33.575913   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:33.575964   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.579592   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:33.579644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:33.614425   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.614447   74510 cri.go:89] found id: ""
	I0816 18:19:33.614455   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:33.614507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.618130   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:33.618178   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:33.652369   74510 cri.go:89] found id: ""
	I0816 18:19:33.652393   74510 logs.go:276] 0 containers: []
	W0816 18:19:33.652403   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:33.652410   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:33.652463   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:33.687276   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.687295   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:33.687301   74510 cri.go:89] found id: ""
	I0816 18:19:33.687309   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:33.687361   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.691100   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.695148   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:33.695179   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.110901   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.606195   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:34.110732   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.110764   75006 pod_ready.go:82] duration metric: took 10.011612904s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.110778   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116373   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.116392   75006 pod_ready.go:82] duration metric: took 5.607377ms for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116401   75006 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124005   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.124027   75006 pod_ready.go:82] duration metric: took 7.618878ms for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124039   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129603   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.129623   75006 pod_ready.go:82] duration metric: took 5.575452ms for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129633   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145449   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.145474   75006 pod_ready.go:82] duration metric: took 15.831669ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506455   75006 pod_ready.go:93] pod "kube-proxy-qsskg" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.506477   75006 pod_ready.go:82] duration metric: took 360.982998ms for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905345   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.905365   75006 pod_ready.go:82] duration metric: took 398.872303ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905373   75006 pod_ready.go:39] duration metric: took 10.812448791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:34.905386   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:34.905430   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:34.920554   75006 api_server.go:72] duration metric: took 11.071846456s to wait for apiserver process to appear ...
	I0816 18:19:34.920574   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:34.920589   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:19:34.927194   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:19:34.928420   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:34.928437   75006 api_server.go:131] duration metric: took 7.857168ms to wait for apiserver health ...
	I0816 18:19:34.928443   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:35.107220   75006 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:35.107248   75006 system_pods.go:61] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.107254   75006 system_pods.go:61] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.107258   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.107262   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.107267   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.107270   75006 system_pods.go:61] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.107274   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.107280   75006 system_pods.go:61] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.107288   75006 system_pods.go:61] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.107296   75006 system_pods.go:74] duration metric: took 178.847431ms to wait for pod list to return data ...
	I0816 18:19:35.107302   75006 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:35.303619   75006 default_sa.go:45] found service account: "default"
	I0816 18:19:35.303646   75006 default_sa.go:55] duration metric: took 196.337687ms for default service account to be created ...
	I0816 18:19:35.303655   75006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:35.508401   75006 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:35.508442   75006 system_pods.go:89] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.508452   75006 system_pods.go:89] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.508460   75006 system_pods.go:89] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.508466   75006 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.508471   75006 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.508477   75006 system_pods.go:89] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.508483   75006 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.508494   75006 system_pods.go:89] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.508504   75006 system_pods.go:89] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.508521   75006 system_pods.go:126] duration metric: took 204.859728ms to wait for k8s-apps to be running ...
	I0816 18:19:35.508544   75006 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:35.508605   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:35.523660   75006 system_svc.go:56] duration metric: took 15.109288ms WaitForService to wait for kubelet
	I0816 18:19:35.523687   75006 kubeadm.go:582] duration metric: took 11.674985717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:35.523704   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:35.704770   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:35.704797   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:35.704808   75006 node_conditions.go:105] duration metric: took 181.099433ms to run NodePressure ...
	I0816 18:19:35.704818   75006 start.go:241] waiting for startup goroutines ...
	I0816 18:19:35.704824   75006 start.go:246] waiting for cluster config update ...
	I0816 18:19:35.704834   75006 start.go:255] writing updated cluster config ...
	I0816 18:19:35.705096   75006 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:35.753637   75006 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:35.755747   75006 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-256678" cluster and "default" namespace by default
	I0816 18:19:33.732856   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:33.732881   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.796167   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:33.796215   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.835842   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:33.835869   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:33.956412   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:33.956450   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:34.004102   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:34.004137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:34.050504   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:34.050548   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:34.087815   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:34.087850   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:34.124096   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:34.124127   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:34.193377   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:34.193410   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:34.206480   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:34.206505   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:34.240262   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:34.240305   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:34.591979   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:34.592014   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:37.142552   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:19:37.142580   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.142585   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.142590   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.142593   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.142596   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.142600   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.142605   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.142609   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.142616   74510 system_pods.go:74] duration metric: took 3.770043434s to wait for pod list to return data ...
	I0816 18:19:37.142625   74510 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:37.145135   74510 default_sa.go:45] found service account: "default"
	I0816 18:19:37.145161   74510 default_sa.go:55] duration metric: took 2.530779ms for default service account to be created ...
	I0816 18:19:37.145169   74510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:37.149397   74510 system_pods.go:86] 8 kube-system pods found
	I0816 18:19:37.149423   74510 system_pods.go:89] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.149431   74510 system_pods.go:89] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.149437   74510 system_pods.go:89] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.149443   74510 system_pods.go:89] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.149451   74510 system_pods.go:89] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.149458   74510 system_pods.go:89] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.149471   74510 system_pods.go:89] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.149480   74510 system_pods.go:89] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.149491   74510 system_pods.go:126] duration metric: took 4.31556ms to wait for k8s-apps to be running ...
	I0816 18:19:37.149502   74510 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:37.149564   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:37.166663   74510 system_svc.go:56] duration metric: took 17.15398ms WaitForService to wait for kubelet
	I0816 18:19:37.166692   74510 kubeadm.go:582] duration metric: took 4m25.517719342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:37.166711   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:37.170081   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:37.170102   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:37.170112   74510 node_conditions.go:105] duration metric: took 3.396116ms to run NodePressure ...
	I0816 18:19:37.170122   74510 start.go:241] waiting for startup goroutines ...
	I0816 18:19:37.170129   74510 start.go:246] waiting for cluster config update ...
	I0816 18:19:37.170138   74510 start.go:255] writing updated cluster config ...
	I0816 18:19:37.170406   74510 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:37.218383   74510 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:37.220397   74510 out.go:177] * Done! kubectl is now configured to use "embed-certs-777541" cluster and "default" namespace by default
	I0816 18:19:37.609143   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:37.609401   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:47.609941   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:47.610185   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:07.611108   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:07.611350   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613446   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:47.613708   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613742   75402 kubeadm.go:310] 
	I0816 18:20:47.613809   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:20:47.613902   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:20:47.613926   75402 kubeadm.go:310] 
	I0816 18:20:47.613976   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:20:47.614028   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:20:47.614160   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:20:47.614174   75402 kubeadm.go:310] 
	I0816 18:20:47.614323   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:20:47.614383   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:20:47.614432   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:20:47.614441   75402 kubeadm.go:310] 
	I0816 18:20:47.614601   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:20:47.614730   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:20:47.614751   75402 kubeadm.go:310] 
	I0816 18:20:47.614875   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:20:47.614982   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:20:47.615101   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:20:47.615217   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:20:47.615230   75402 kubeadm.go:310] 
	I0816 18:20:47.616865   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:20:47.616971   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:20:47.617028   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 18:20:47.617173   75402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 18:20:47.617226   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:20:48.158066   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:20:48.172568   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:20:48.182445   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:20:48.182468   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:20:48.182527   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:20:48.191779   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:20:48.191847   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:20:48.201531   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:20:48.210495   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:20:48.210568   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:20:48.219701   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.228170   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:20:48.228242   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.237366   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:20:48.246335   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:20:48.246393   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:20:48.255655   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:20:48.321873   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:20:48.321930   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:20:48.462199   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:20:48.462324   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:20:48.462448   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:20:48.646565   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:20:48.648485   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:20:48.648605   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:20:48.648748   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:20:48.648895   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:20:48.648994   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:20:48.649088   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:20:48.649185   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:20:48.649282   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:20:48.649368   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:20:48.649485   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:20:48.649595   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:20:48.649649   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:20:48.649753   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:20:48.864525   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:20:49.035729   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:20:49.086765   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:20:49.222612   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:20:49.239121   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:20:49.240158   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:20:49.240200   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:20:49.366027   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:20:49.367770   75402 out.go:235]   - Booting up control plane ...
	I0816 18:20:49.367907   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:20:49.373047   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:20:49.373886   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:20:49.374691   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:20:49.379220   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:21:29.381362   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:21:29.381473   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:29.381700   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:34.381889   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:34.382065   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:44.382765   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:44.382964   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:04.383485   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:04.383748   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382265   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:44.382558   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382572   75402 kubeadm.go:310] 
	I0816 18:22:44.382628   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:22:44.382715   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:22:44.382741   75402 kubeadm.go:310] 
	I0816 18:22:44.382789   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:22:44.382837   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:22:44.382986   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:22:44.382997   75402 kubeadm.go:310] 
	I0816 18:22:44.383149   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:22:44.383202   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:22:44.383246   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:22:44.383258   75402 kubeadm.go:310] 
	I0816 18:22:44.383421   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:22:44.383534   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:22:44.383549   75402 kubeadm.go:310] 
	I0816 18:22:44.383743   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:22:44.383877   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:22:44.383993   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:22:44.384092   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:22:44.384103   75402 kubeadm.go:310] 
	I0816 18:22:44.384783   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:22:44.384895   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:22:44.384986   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:22:44.385062   75402 kubeadm.go:394] duration metric: took 8m1.372176417s to StartCluster
	I0816 18:22:44.385108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:22:44.385173   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:22:44.425862   75402 cri.go:89] found id: ""
	I0816 18:22:44.425892   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.425901   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:22:44.425909   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:22:44.425982   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:22:44.461988   75402 cri.go:89] found id: ""
	I0816 18:22:44.462019   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.462030   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:22:44.462038   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:22:44.462109   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:22:44.496063   75402 cri.go:89] found id: ""
	I0816 18:22:44.496095   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.496106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:22:44.496114   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:22:44.496175   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:22:44.529875   75402 cri.go:89] found id: ""
	I0816 18:22:44.529899   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.529906   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:22:44.529912   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:22:44.529958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:22:44.565745   75402 cri.go:89] found id: ""
	I0816 18:22:44.565781   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.565791   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:22:44.565798   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:22:44.565860   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:22:44.604122   75402 cri.go:89] found id: ""
	I0816 18:22:44.604149   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.604160   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:22:44.604168   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:22:44.604228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:22:44.636607   75402 cri.go:89] found id: ""
	I0816 18:22:44.636658   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.636669   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:22:44.636677   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:22:44.636736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:22:44.670942   75402 cri.go:89] found id: ""
	I0816 18:22:44.670973   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.670981   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:22:44.670989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:22:44.671001   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:22:44.722403   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:22:44.722433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:22:44.738587   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:22:44.738627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:22:44.854530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:22:44.854563   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:22:44.854579   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:22:44.957308   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:22:44.957342   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 18:22:44.997652   75402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:22:44.997714   75402 out.go:270] * 
	W0816 18:22:44.997804   75402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:44.997828   75402 out.go:270] * 
	W0816 18:22:44.998787   75402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:22:45.002189   75402 out.go:201] 
	W0816 18:22:45.003254   75402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:45.003310   75402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:22:45.003340   75402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:22:45.004826   75402 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.844717381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832566844673198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=018a6d80-3478-4b3f-a7a5-174c2be7bc50 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.845259602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b192e5d-5029-41ca-83c3-7cfcc9b2e2d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.845341826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b192e5d-5029-41ca-83c3-7cfcc9b2e2d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.845382133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1b192e5d-5029-41ca-83c3-7cfcc9b2e2d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.875741207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0da6b336-9803-47ab-86e5-cbc1a7db48c5 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.875836284Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0da6b336-9803-47ab-86e5-cbc1a7db48c5 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.877642794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6b5ed79-4fd9-42e9-9edd-2e67eca4fa7e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.878068838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832566878046402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6b5ed79-4fd9-42e9-9edd-2e67eca4fa7e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.878567062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2ba4622-7624-4611-9408-3a6f23e1fabb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.878633173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2ba4622-7624-4611-9408-3a6f23e1fabb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.878690670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b2ba4622-7624-4611-9408-3a6f23e1fabb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.910389642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a7869fc-71b6-473d-b00c-b809f958e01a name=/runtime.v1.RuntimeService/Version
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.910482352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a7869fc-71b6-473d-b00c-b809f958e01a name=/runtime.v1.RuntimeService/Version
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.911487191Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0aebcec1-93f2-4a45-8380-f088c0bf4883 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.911865473Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832566911842627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0aebcec1-93f2-4a45-8380-f088c0bf4883 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.912415775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bc55dcf-b75e-4a60-82e8-9f425a4f1d5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.912479204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bc55dcf-b75e-4a60-82e8-9f425a4f1d5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.912518370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0bc55dcf-b75e-4a60-82e8-9f425a4f1d5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.942099943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3faaf176-715e-4b8a-ab5d-c7e85a159123 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.942251316Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3faaf176-715e-4b8a-ab5d-c7e85a159123 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.947835132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c89ade4-e76e-4111-ae72-1d6ed9de9abb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.948288542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832566948264475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c89ade4-e76e-4111-ae72-1d6ed9de9abb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.948721407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bca83c0c-90ca-4e1c-813d-94484aa8894f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.948787825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bca83c0c-90ca-4e1c-813d-94484aa8894f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:22:46 old-k8s-version-783465 crio[653]: time="2024-08-16 18:22:46.948826547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bca83c0c-90ca-4e1c-813d-94484aa8894f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 18:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045169] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997853] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.853876] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.352877] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.345481] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.064693] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054338] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.181344] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.146416] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.232451] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +6.280356] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +0.058572] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.868893] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[ +13.997238] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 18:18] systemd-fstab-generator[5183]: Ignoring "noauto" option for root device
	[Aug16 18:20] systemd-fstab-generator[5458]: Ignoring "noauto" option for root device
	[  +0.064746] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:22:47 up 8 min,  0 users,  load average: 0.01, 0.11, 0.08
	Linux old-k8s-version-783465 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: net.(*sysDialer).dialSingle(0xc000b48c00, 0x4f7fe40, 0xc0001ebec0, 0x4f1ff00, 0xc0007e7f80, 0x0, 0x0, 0x0, 0x0)
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: net.(*sysDialer).dialSerial(0xc000b48c00, 0x4f7fe40, 0xc0001ebec0, 0xc0008f4010, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: net.(*Dialer).DialContext(0xc0001b3aa0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000762bd0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0001e6480, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000762bd0, 0x24, 0x60, 0x7f1d1d5ca4b8, 0x118, ...)
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: net/http.(*Transport).dial(0xc000870dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000762bd0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: net/http.(*Transport).dialConn(0xc000870dc0, 0x4f7fe00, 0xc000120018, 0x0, 0xc0003e43c0, 0x5, 0xc000762bd0, 0x24, 0x0, 0xc0008da360, ...)
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: net/http.(*Transport).dialConnFor(0xc000870dc0, 0xc000b6bb80)
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]: created by net/http.(*Transport).queueForDial
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5638]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 16 18:22:44 old-k8s-version-783465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 16 18:22:44 old-k8s-version-783465 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 18:22:44 old-k8s-version-783465 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5689]: I0816 18:22:44.826215    5689 server.go:416] Version: v1.20.0
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5689]: I0816 18:22:44.826470    5689 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5689]: I0816 18:22:44.828441    5689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5689]: W0816 18:22:44.829457    5689 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 16 18:22:44 old-k8s-version-783465 kubelet[5689]: I0816 18:22:44.829614    5689 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
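The kubeadm output captured above keeps pointing at the same three checks: the kubelet health probe on localhost:10248, the systemd/journal status of the kubelet, and a crictl listing of whatever control-plane containers CRI-O managed to start. As a minimal sketch, assuming the node for this profile is still reachable over SSH, those checks could be replayed by hand as shown below; the profile name and every command are taken from the log itself, nothing else is assumed:

	# Open a shell on the affected node (profile name from the log above).
	minikube ssh -p old-k8s-version-783465
	# Inside the node: check whether the kubelet is running and why it keeps restarting.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# Probe the health endpoint that the kubeadm [kubelet-check] phase polls.
	curl -sSL http://localhost:10248/healthz
	# List any control-plane containers CRI-O managed to start.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause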
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (224.844549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-783465" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (715.54s)
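The exit reason recorded above is K8S_KUBELET_NOT_RUNNING, and minikube's own suggestion in the log is to pass --extra-config=kubelet.cgroup-driver=systemd on the next start. A hedged sketch of retrying the second start with that flag follows; the profile name, Kubernetes version, driver and container runtime come from the log, while the memory value is only a placeholder mirroring the other start invocations in this report, not the recorded value for this profile:

	# Retry the failed SecondStart with the cgroup-driver suggestion from the log.
	minikube start -p old-k8s-version-783465 \
	  --memory=2200 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd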

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0816 18:19:32.865980   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864476 -n no-preload-864476
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 18:28:26.157843232 +0000 UTC m=+6006.476388253
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
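The wait above is purely label-driven: the test polls the kubernetes-dashboard namespace for pods labelled k8s-app=kubernetes-dashboard and gives up after 9m0s. A minimal sketch of inspecting the same pods by hand is shown below, assuming the kubeconfig context is named after the profile, as it is elsewhere in this report:

	# List and describe the dashboard pods the test was waiting for.
	kubectl --context no-preload-864476 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-864476 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard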
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-864476 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-864476 logs -n 25: (2.087505144s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo cat                      | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
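	
	Reading note: each multi-row entry in the audit table above is a single CLI invocation whose flags were wrapped across cells. As an illustration (the binary name is assumed here, since the table records only subcommands and flags), the final start entry reconstructs to roughly:
	
	  minikube start -p old-k8s-version-783465 --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0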
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:10:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:10:53.101401   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101412   75402 out.go:358] Setting ErrFile to fd 2...
	I0816 18:10:53.101418   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101600   75402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:10:53.102131   75402 out.go:352] Setting JSON to false
	I0816 18:10:53.103018   75402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1723825102,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:10:53.103076   75402 start.go:139] virtualization: kvm guest
	I0816 18:10:53.105216   75402 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:10:53.106496   75402 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:10:53.106504   75402 notify.go:220] Checking for updates...
	I0816 18:10:53.109235   75402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:10:53.110572   75402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:10:53.111747   75402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:10:53.113164   75402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:10:53.114589   75402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:10:53.116284   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:10:53.116746   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.116806   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.132445   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0816 18:10:53.132886   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.133456   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.133494   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.133836   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.134015   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.135791   75402 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:10:53.136942   75402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:10:53.137229   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.137260   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.151853   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0816 18:10:53.152327   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.152881   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.152905   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.153159   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.153307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.188002   75402 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:10:53.189287   75402 start.go:297] selected driver: kvm2
	I0816 18:10:53.189309   75402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.189432   75402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:10:53.190098   75402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.190187   75402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:10:53.205024   75402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:10:53.205386   75402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:10:53.205417   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:10:53.205425   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:10:53.205458   75402 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.205557   75402 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.207241   75402 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:10:53.208254   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:10:53.208286   75402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:10:53.208298   75402 cache.go:56] Caching tarball of preloaded images
	I0816 18:10:53.208386   75402 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:10:53.208400   75402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:10:53.208510   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:10:53.208736   75402 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:10:54.604889   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:10:57.676891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:03.756940   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:06.828911   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:12.908885   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:15.980925   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:22.060891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:25.132961   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:31.212919   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:34.284876   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:40.365032   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:43.436910   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:49.516914   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:52.588969   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:58.668915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:01.740965   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:07.820898   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:10.892922   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:16.972913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:20.044913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:26.124921   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:29.196968   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:35.276952   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:38.348971   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:44.428932   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:47.500897   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:53.580923   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:56.652927   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:02.732992   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:05.804929   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:11.884953   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:14.956943   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:21.036963   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:24.108915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:30.188851   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:33.260936   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:36.264963   74828 start.go:364] duration metric: took 4m2.37855556s to acquireMachinesLock for "no-preload-864476"
	I0816 18:13:36.265020   74828 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:36.265027   74828 fix.go:54] fixHost starting: 
	I0816 18:13:36.265379   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:36.265409   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:36.280707   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0816 18:13:36.281167   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:36.281747   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:13:36.281778   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:36.282122   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:36.282330   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:36.282457   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:13:36.284064   74828 fix.go:112] recreateIfNeeded on no-preload-864476: state=Stopped err=<nil>
	I0816 18:13:36.284084   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	W0816 18:13:36.284217   74828 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:36.286749   74828 out.go:177] * Restarting existing kvm2 VM for "no-preload-864476" ...
	I0816 18:13:36.262619   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:36.262654   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.262944   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:13:36.262967   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.263222   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:13:36.264803   74510 machine.go:96] duration metric: took 4m37.429582668s to provisionDockerMachine
	I0816 18:13:36.264858   74510 fix.go:56] duration metric: took 4m37.449862851s for fixHost
	I0816 18:13:36.264867   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 4m37.449881856s
	W0816 18:13:36.264895   74510 start.go:714] error starting host: provision: host is not running
	W0816 18:13:36.264994   74510 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 18:13:36.265005   74510 start.go:729] Will try again in 5 seconds ...
	I0816 18:13:36.288329   74828 main.go:141] libmachine: (no-preload-864476) Calling .Start
	I0816 18:13:36.288484   74828 main.go:141] libmachine: (no-preload-864476) Ensuring networks are active...
	I0816 18:13:36.289285   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network default is active
	I0816 18:13:36.289912   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network mk-no-preload-864476 is active
	I0816 18:13:36.290318   74828 main.go:141] libmachine: (no-preload-864476) Getting domain xml...
	I0816 18:13:36.291176   74828 main.go:141] libmachine: (no-preload-864476) Creating domain...
	I0816 18:13:37.504191   74828 main.go:141] libmachine: (no-preload-864476) Waiting to get IP...
	I0816 18:13:37.505110   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.505575   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.505621   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.505543   75973 retry.go:31] will retry after 308.411866ms: waiting for machine to come up
	I0816 18:13:37.816219   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.816877   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.816931   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.816852   75973 retry.go:31] will retry after 321.445064ms: waiting for machine to come up
	I0816 18:13:38.140594   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.141059   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.141082   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.141018   75973 retry.go:31] will retry after 337.935433ms: waiting for machine to come up
	I0816 18:13:38.480699   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.481110   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.481135   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.481033   75973 retry.go:31] will retry after 449.775503ms: waiting for machine to come up
	I0816 18:13:41.266589   74510 start.go:360] acquireMachinesLock for embed-certs-777541: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:13:38.932812   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.933232   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.933259   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.933171   75973 retry.go:31] will retry after 482.676832ms: waiting for machine to come up
	I0816 18:13:39.417939   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:39.418323   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:39.418350   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:39.418276   75973 retry.go:31] will retry after 740.37516ms: waiting for machine to come up
	I0816 18:13:40.160491   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:40.160917   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:40.160942   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:40.160867   75973 retry.go:31] will retry after 1.10464436s: waiting for machine to come up
	I0816 18:13:41.267213   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:41.267654   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:41.267680   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:41.267613   75973 retry.go:31] will retry after 1.395131164s: waiting for machine to come up
	I0816 18:13:42.664731   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:42.665229   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:42.665252   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:42.665181   75973 retry.go:31] will retry after 1.560403289s: waiting for machine to come up
	I0816 18:13:44.226847   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:44.227375   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:44.227404   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:44.227342   75973 retry.go:31] will retry after 1.647944685s: waiting for machine to come up
	I0816 18:13:45.876965   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:45.877411   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:45.877440   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:45.877366   75973 retry.go:31] will retry after 1.971325886s: waiting for machine to come up
	I0816 18:13:47.849950   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:47.850457   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:47.850490   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:47.850383   75973 retry.go:31] will retry after 2.95642392s: waiting for machine to come up
	I0816 18:13:50.810560   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:50.811013   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:50.811045   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:50.810930   75973 retry.go:31] will retry after 4.510008193s: waiting for machine to come up
	I0816 18:13:56.529339   75006 start.go:364] duration metric: took 4m6.515818295s to acquireMachinesLock for "default-k8s-diff-port-256678"
	I0816 18:13:56.529444   75006 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:56.529459   75006 fix.go:54] fixHost starting: 
	I0816 18:13:56.529851   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:56.529890   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:56.547077   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0816 18:13:56.547585   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:56.548068   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:13:56.548091   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:56.548421   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:56.548610   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:13:56.548766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:13:56.550373   75006 fix.go:112] recreateIfNeeded on default-k8s-diff-port-256678: state=Stopped err=<nil>
	I0816 18:13:56.550414   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	W0816 18:13:56.550604   75006 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:56.552781   75006 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-256678" ...
	I0816 18:13:55.326062   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.326558   74828 main.go:141] libmachine: (no-preload-864476) Found IP for machine: 192.168.50.50
	I0816 18:13:55.326576   74828 main.go:141] libmachine: (no-preload-864476) Reserving static IP address...
	I0816 18:13:55.326593   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has current primary IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.327109   74828 main.go:141] libmachine: (no-preload-864476) Reserved static IP address: 192.168.50.50
	I0816 18:13:55.327142   74828 main.go:141] libmachine: (no-preload-864476) Waiting for SSH to be available...
	I0816 18:13:55.327167   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.327191   74828 main.go:141] libmachine: (no-preload-864476) DBG | skip adding static IP to network mk-no-preload-864476 - found existing host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"}
	I0816 18:13:55.327205   74828 main.go:141] libmachine: (no-preload-864476) DBG | Getting to WaitForSSH function...
	I0816 18:13:55.329001   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329350   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.329378   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329534   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH client type: external
	I0816 18:13:55.329574   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa (-rw-------)
	I0816 18:13:55.329604   74828 main.go:141] libmachine: (no-preload-864476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:13:55.329622   74828 main.go:141] libmachine: (no-preload-864476) DBG | About to run SSH command:
	I0816 18:13:55.329636   74828 main.go:141] libmachine: (no-preload-864476) DBG | exit 0
	I0816 18:13:55.452553   74828 main.go:141] libmachine: (no-preload-864476) DBG | SSH cmd err, output: <nil>: 
	I0816 18:13:55.452964   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetConfigRaw
	I0816 18:13:55.453557   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.455951   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456334   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.456370   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456564   74828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/config.json ...
	I0816 18:13:55.456782   74828 machine.go:93] provisionDockerMachine start ...
	I0816 18:13:55.456801   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:55.456983   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.459149   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459547   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.459570   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459730   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.459918   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460068   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460207   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.460418   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.460603   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.460637   74828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:13:55.564875   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:13:55.564903   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565203   74828 buildroot.go:166] provisioning hostname "no-preload-864476"
	I0816 18:13:55.565229   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565455   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.568114   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568578   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.568612   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568777   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.568912   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569023   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569200   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.569448   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.569649   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.569667   74828 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-864476 && echo "no-preload-864476" | sudo tee /etc/hostname
	I0816 18:13:55.686349   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-864476
	
	I0816 18:13:55.686378   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.689171   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689572   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.689608   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689792   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.690008   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690183   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690418   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.690623   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.690782   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.690798   74828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864476/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:13:55.800352   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:55.800386   74828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:13:55.800436   74828 buildroot.go:174] setting up certificates
	I0816 18:13:55.800452   74828 provision.go:84] configureAuth start
	I0816 18:13:55.800470   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.800793   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.803388   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.803786   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.803822   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.804025   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.806567   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.806977   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.807003   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.807129   74828 provision.go:143] copyHostCerts
	I0816 18:13:55.807178   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:13:55.807198   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:13:55.807286   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:13:55.807401   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:13:55.807412   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:13:55.807439   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:13:55.807554   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:13:55.807565   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:13:55.807588   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:13:55.807648   74828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.no-preload-864476 san=[127.0.0.1 192.168.50.50 localhost minikube no-preload-864476]
	I0816 18:13:55.881474   74828 provision.go:177] copyRemoteCerts
	I0816 18:13:55.881529   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:13:55.881558   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.884424   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.884952   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.884983   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.885138   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.885335   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.885486   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.885669   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:55.966915   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:13:55.989812   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:13:56.011744   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:13:56.032745   74828 provision.go:87] duration metric: took 232.276991ms to configureAuth
	I0816 18:13:56.032778   74828 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:13:56.033001   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:13:56.033096   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.035919   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036283   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.036311   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036499   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.036713   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036975   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.037100   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.037275   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.037294   74828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:13:56.296112   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:13:56.296140   74828 machine.go:96] duration metric: took 839.343895ms to provisionDockerMachine
	I0816 18:13:56.296152   74828 start.go:293] postStartSetup for "no-preload-864476" (driver="kvm2")
	I0816 18:13:56.296162   74828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:13:56.296177   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.296537   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:13:56.296570   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.299838   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300364   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.300396   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300603   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.300833   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.300985   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.301187   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.383095   74828 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:13:56.387172   74828 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:13:56.387200   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:13:56.387286   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:13:56.387392   74828 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:13:56.387550   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:13:56.396072   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:13:56.419470   74828 start.go:296] duration metric: took 123.306644ms for postStartSetup
	I0816 18:13:56.419509   74828 fix.go:56] duration metric: took 20.154482872s for fixHost
	I0816 18:13:56.419529   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.422047   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422454   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.422503   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422573   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.422764   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.422963   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.423150   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.423388   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.423597   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.423610   74828 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:13:56.529164   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832036.506687395
	
	I0816 18:13:56.529190   74828 fix.go:216] guest clock: 1723832036.506687395
	I0816 18:13:56.529200   74828 fix.go:229] Guest: 2024-08-16 18:13:56.506687395 +0000 UTC Remote: 2024-08-16 18:13:56.419513163 +0000 UTC m=+262.671840210 (delta=87.174232ms)
	I0816 18:13:56.529229   74828 fix.go:200] guest clock delta is within tolerance: 87.174232ms
	I0816 18:13:56.529246   74828 start.go:83] releasing machines lock for "no-preload-864476", held for 20.264231324s
	I0816 18:13:56.529276   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.529645   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:56.532279   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532599   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.532660   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532824   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533348   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533522   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533604   74828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:13:56.533663   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.533759   74828 ssh_runner.go:195] Run: cat /version.json
	I0816 18:13:56.533786   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.536427   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536711   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536822   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.536845   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536996   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537071   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.537105   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.537191   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537334   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537430   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537497   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.537582   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537728   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537964   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.654319   74828 ssh_runner.go:195] Run: systemctl --version
	I0816 18:13:56.660640   74828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:13:56.806359   74828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:13:56.812415   74828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:13:56.812489   74828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:13:56.828095   74828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:13:56.828122   74828 start.go:495] detecting cgroup driver to use...
	I0816 18:13:56.828186   74828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:13:56.843041   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:13:56.856322   74828 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:13:56.856386   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:13:56.869899   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:13:56.884609   74828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:13:56.990986   74828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:13:57.134218   74828 docker.go:233] disabling docker service ...
	I0816 18:13:57.134283   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:13:57.156415   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:13:57.172969   74828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:13:57.328279   74828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:13:57.448217   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:13:57.461630   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:13:57.478199   74828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:13:57.478271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.487845   74828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:13:57.487918   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.497895   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.509260   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.519090   74828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:13:57.529351   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.539816   74828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.559271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.573027   74828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:13:57.583410   74828 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:13:57.583490   74828 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:13:57.598762   74828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:13:57.609589   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:13:57.727016   74828 ssh_runner.go:195] Run: sudo systemctl restart crio
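Note: taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup driver, move conmon into the pod cgroup, and open unprivileged low ports before crio is restarted. A sketch of verifying the result on the node; the expected values are reconstructed from the commands in this log, not copied from the live file:

    # confirm the drop-in edits landed in /etc/crio/crio.conf.d/02-crio.conf
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo crictl info >/dev/null && echo "crio answering on /var/run/crio/crio.sock"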
	I0816 18:13:57.876815   74828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:13:57.876876   74828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:13:57.882172   74828 start.go:563] Will wait 60s for crictl version
	I0816 18:13:57.882241   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:57.885706   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:13:57.926981   74828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:13:57.927070   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.957802   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.984920   74828 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:13:57.986450   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:57.989584   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990205   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:57.990257   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990552   74828 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 18:13:57.994584   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:13:58.007996   74828 kubeadm.go:883] updating cluster {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:13:58.008137   74828 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:13:58.008184   74828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:13:58.041643   74828 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:13:58.041672   74828 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:13:58.041751   74828 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.041778   74828 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.041794   74828 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.041741   74828 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.041779   74828 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.041899   74828 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 18:13:58.041918   74828 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.041798   74828 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.043388   74828 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.043394   74828 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.289223   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.299125   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.308703   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 18:13:58.339031   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.351467   74828 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 18:13:58.351514   74828 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.351572   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.358019   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.359198   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.385487   74828 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 18:13:58.385529   74828 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.385571   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.392417   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.506834   74828 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 18:13:58.506886   74828 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.506896   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.506924   74828 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 18:13:58.506963   74828 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.507003   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.506928   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507072   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.507004   74828 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 18:13:58.507099   74828 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.507124   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507160   74828 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 18:13:58.507181   74828 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.507228   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.562410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.562469   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.562481   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.562554   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.562590   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.562628   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.686069   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.690288   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.690352   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.692851   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.692911   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.693027   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.777263   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:56.554238   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Start
	I0816 18:13:56.554426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring networks are active...
	I0816 18:13:56.555221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network default is active
	I0816 18:13:56.555599   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network mk-default-k8s-diff-port-256678 is active
	I0816 18:13:56.556004   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Getting domain xml...
	I0816 18:13:56.556809   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Creating domain...
	I0816 18:13:57.825641   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting to get IP...
	I0816 18:13:57.826681   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827158   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:57.827129   76107 retry.go:31] will retry after 267.923612ms: waiting for machine to come up
	I0816 18:13:58.096794   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.097158   76107 retry.go:31] will retry after 286.726817ms: waiting for machine to come up
	I0816 18:13:58.386213   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386782   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.386704   76107 retry.go:31] will retry after 386.697374ms: waiting for machine to come up
	I0816 18:13:58.775483   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.775989   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.776014   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.775949   76107 retry.go:31] will retry after 554.398617ms: waiting for machine to come up
	I0816 18:13:59.331517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332002   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.331943   76107 retry.go:31] will retry after 589.24333ms: waiting for machine to come up
	I0816 18:13:58.823309   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 18:13:58.823318   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 18:13:58.823410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.823434   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.823437   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:13:58.823549   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.836312   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.894363   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 18:13:58.894428   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 18:13:58.894447   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:13:58.933183   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 18:13:58.933290   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:13:58.934389   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 18:13:58.934456   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 18:13:58.934491   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 18:13:58.934550   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:13:58.934569   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:13:58.934682   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792156   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.897633034s)
	I0816 18:14:00.792196   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 18:14:00.792224   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.89763588s)
	I0816 18:14:00.792257   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 18:14:00.792230   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792281   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.858968807s)
	I0816 18:14:00.792300   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 18:14:00.792317   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792355   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.85778817s)
	I0816 18:14:00.792370   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 18:14:00.792415   74828 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.857704749s)
	I0816 18:14:00.792422   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.857843473s)
	I0816 18:14:00.792436   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 18:14:00.792457   74828 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 18:14:00.792491   74828 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792528   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:14:00.797103   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:03.171070   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.378727123s)
	I0816 18:14:03.171118   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 18:14:03.171149   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.374004458s)
	I0816 18:14:03.171155   74828 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171274   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171225   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:59.922834   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923439   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.923368   76107 retry.go:31] will retry after 779.656786ms: waiting for machine to come up
	I0816 18:14:00.704929   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705395   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705417   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:00.705344   76107 retry.go:31] will retry after 790.87115ms: waiting for machine to come up
	I0816 18:14:01.497557   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.497999   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.498052   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:01.497981   76107 retry.go:31] will retry after 919.825072ms: waiting for machine to come up
	I0816 18:14:02.419821   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420280   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420312   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:02.420227   76107 retry.go:31] will retry after 1.304504009s: waiting for machine to come up
	I0816 18:14:03.725928   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726378   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726400   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:03.726344   76107 retry.go:31] will retry after 2.105251359s: waiting for machine to come up
	I0816 18:14:06.879864   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.708558161s)
	I0816 18:14:06.879904   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 18:14:06.879905   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.708563338s)
	I0816 18:14:06.879935   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:06.879981   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:06.879991   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:08.769077   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.889063218s)
	I0816 18:14:08.769114   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 18:14:08.769145   74828 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769231   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769146   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.889146748s)
	I0816 18:14:08.769343   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 18:14:08.769431   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:05.833605   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834078   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834109   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:05.834025   76107 retry.go:31] will retry after 2.042421539s: waiting for machine to come up
	I0816 18:14:07.878000   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878510   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878541   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:07.878432   76107 retry.go:31] will retry after 2.777402825s: waiting for machine to come up
	I0816 18:14:10.627286   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.858028746s)
	I0816 18:14:10.627331   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 18:14:10.627346   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.857891086s)
	I0816 18:14:10.627358   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:10.627378   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 18:14:10.627402   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:11.977277   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.349851948s)
	I0816 18:14:11.977314   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 18:14:11.977339   74828 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:11.977389   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:12.630939   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 18:14:12.630999   74828 cache_images.go:123] Successfully loaded all cached images
	I0816 18:14:12.631004   74828 cache_images.go:92] duration metric: took 14.589319022s to LoadCachedImages
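Note: with all eight images transferred and loaded, the node's image store can be spot-checked with the same crictl the log already uses; a minimal sketch, image names taken from the LoadCachedImages list above:

    # every image the restart needs should now be visible to CRI-O
    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'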
	I0816 18:14:12.631016   74828 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.31.0 crio true true} ...
	I0816 18:14:12.631132   74828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-864476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
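Note: the kubelet unit text above follows the standard systemd override pattern: the bare ExecStart= clears whatever ExecStart the base unit defined, and the second ExecStart= substitutes the minikube-specific command line. Once the drop-in is installed (the 10-kubeadm.conf scp a few lines below), the merged unit can be inspected with:

    # show the base unit plus all drop-ins, then the effective ExecStart
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager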
	I0816 18:14:12.631207   74828 ssh_runner.go:195] Run: crio config
	I0816 18:14:12.683072   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:12.683094   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:12.683107   74828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:12.683129   74828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864476 NodeName:no-preload-864476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:12.683276   74828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864476"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:12.683345   74828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:12.693879   74828 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:12.693941   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:12.702601   74828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 18:14:12.718235   74828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:12.733455   74828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
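Note: the kubeadm config printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new here. If a config like this needs to be sanity-checked by hand, recent kubeadm releases can validate it statically; this is a sketch, not a step minikube runs in this log:

    # static validation of the generated config ('kubeadm config validate' exists in current kubeadm releases)
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new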
	I0816 18:14:12.748878   74828 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:12.752276   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:12.763390   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:12.872450   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:12.888531   74828 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476 for IP: 192.168.50.50
	I0816 18:14:12.888569   74828 certs.go:194] generating shared ca certs ...
	I0816 18:14:12.888589   74828 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:12.888783   74828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:12.888845   74828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:12.888860   74828 certs.go:256] generating profile certs ...
	I0816 18:14:12.888971   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/client.key
	I0816 18:14:12.889070   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key.30cf6dcb
	I0816 18:14:12.889136   74828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key
	I0816 18:14:12.889298   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:12.889339   74828 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:12.889351   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:12.889391   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:12.889421   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:12.889452   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:12.889507   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:12.890441   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:12.919571   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:12.947375   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:12.975197   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:13.007308   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:14:13.056151   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:13.080317   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:13.102231   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:13.124045   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:13.145312   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:13.166806   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:13.188173   74828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:13.203594   74828 ssh_runner.go:195] Run: openssl version
	I0816 18:14:13.209148   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:13.220266   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224569   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224635   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.230141   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:13.241362   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:13.252437   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256658   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256712   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.262006   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:13.273168   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:13.284518   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288566   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288611   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.293944   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
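Note: the symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: the value printed by `openssl x509 -hash` for each certificate plus a ".0" suffix, which is how OpenSSL's hashed certificate-directory lookup finds the file. Reproducing one of them by hand (the b5213941 value is implied by the ln command above, not printed in this log):

    # the ln -fs target is "<subject hash>.0"
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0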
	I0816 18:14:13.305148   74828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:13.309460   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:13.315123   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:13.320854   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:13.326676   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:13.332183   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:13.337794   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
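Note: the -checkend 86400 probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if the certificate would expire inside that window, presumably so the restart path can refresh anything about to expire. The same check with a human-readable expiry date, using one certificate from the list above:

    # exit status 0 means the certificate is still valid 24h from now
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h"
    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt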
	I0816 18:14:13.343369   74828 kubeadm.go:392] StartCluster: {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:13.343470   74828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:13.343527   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.384490   74828 cri.go:89] found id: ""
	I0816 18:14:13.384567   74828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:13.395094   74828 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:13.395116   74828 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:13.395183   74828 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:13.406605   74828 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:13.407898   74828 kubeconfig.go:125] found "no-preload-864476" server: "https://192.168.50.50:8443"
	I0816 18:14:13.410808   74828 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:13.420516   74828 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.50
	I0816 18:14:13.420541   74828 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:13.420554   74828 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:13.420589   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.459174   74828 cri.go:89] found id: ""
	I0816 18:14:13.459242   74828 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:13.475598   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:13.484685   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:13.484707   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:13.484756   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:13.493092   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:13.493147   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:13.501649   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:13.509987   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:13.510028   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:13.518500   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.526689   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:13.526737   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.535606   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:13.545130   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:13.545185   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
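The grep/rm sequence above is a stale-kubeconfig sweep: any of the four kubeconfigs that does not already reference https://control-plane.minikube.internal:8443 (here they are simply missing after the stop) is deleted so the following `kubeadm init phase kubeconfig all` can recreate it. A simplified local sketch of that loop, with the SSH transport and error handling trimmed:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // Simplified version of the stale-config sweep above: keep a kubeconfig only if
    // it already points at the expected control-plane endpoint, otherwise remove it
    // so "kubeadm init phase kubeconfig all" regenerates it.
    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself) is missing,
            // exactly the "Process exited with status 2" cases in the log.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                _ = os.Remove(f)
            }
        }
    }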
	I0816 18:14:13.553947   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:13.562763   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:13.663383   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:10.657652   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658105   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:10.657999   76107 retry.go:31] will retry after 3.856225979s: waiting for machine to come up
	I0816 18:14:14.518358   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.518875   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Found IP for machine: 192.168.72.144
	I0816 18:14:14.518896   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserving static IP address...
	I0816 18:14:14.518915   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has current primary IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.519296   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserved static IP address: 192.168.72.144
	I0816 18:14:14.519334   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.519346   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for SSH to be available...
	I0816 18:14:14.519377   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | skip adding static IP to network mk-default-k8s-diff-port-256678 - found existing host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"}
	I0816 18:14:14.519391   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Getting to WaitForSSH function...
	I0816 18:14:14.521566   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.521926   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.521969   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.522133   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH client type: external
	I0816 18:14:14.522160   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa (-rw-------)
	I0816 18:14:14.522202   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:14.522221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | About to run SSH command:
	I0816 18:14:14.522235   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | exit 0
	I0816 18:14:14.648603   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:14.649005   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetConfigRaw
	I0816 18:14:14.649616   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:14.652340   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.652767   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.652796   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.653116   75006 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/config.json ...
	I0816 18:14:14.653337   75006 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:14.653361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:14.653598   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.656062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656412   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.656442   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656565   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.656757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.656895   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.657015   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.657128   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.657312   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.657321   75006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:14.768721   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:14.768749   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.768990   75006 buildroot.go:166] provisioning hostname "default-k8s-diff-port-256678"
	I0816 18:14:14.769021   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.769246   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.772310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772675   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.772704   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772922   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.773084   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773242   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.773564   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.773764   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.773783   75006 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-256678 && echo "default-k8s-diff-port-256678" | sudo tee /etc/hostname
	I0816 18:14:14.894016   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-256678
	
	I0816 18:14:14.894047   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.896797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897150   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.897184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897424   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.897613   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897933   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.898124   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.898286   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.898303   75006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-256678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-256678/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-256678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:15.814480   75402 start.go:364] duration metric: took 3m22.605706427s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:14:15.814546   75402 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:15.814554   75402 fix.go:54] fixHost starting: 
	I0816 18:14:15.815001   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:15.815062   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:15.834710   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0816 18:14:15.835124   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:15.835653   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:14:15.835676   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:15.836005   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:15.836258   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:15.836392   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:14:15.838010   75402 fix.go:112] recreateIfNeeded on old-k8s-version-783465: state=Stopped err=<nil>
	I0816 18:14:15.838043   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	W0816 18:14:15.838200   75402 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:15.840214   75402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	I0816 18:14:15.016150   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:15.016176   75006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:15.016200   75006 buildroot.go:174] setting up certificates
	I0816 18:14:15.016213   75006 provision.go:84] configureAuth start
	I0816 18:14:15.016231   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:15.016518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.019132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019687   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.019725   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019907   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.022758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023192   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.023233   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023408   75006 provision.go:143] copyHostCerts
	I0816 18:14:15.023468   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:15.023489   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:15.023552   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:15.023649   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:15.023659   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:15.023681   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:15.023733   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:15.023740   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:15.023756   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:15.023802   75006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-256678 san=[127.0.0.1 192.168.72.144 default-k8s-diff-port-256678 localhost minikube]
	I0816 18:14:15.142549   75006 provision.go:177] copyRemoteCerts
	I0816 18:14:15.142601   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:15.142625   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.145515   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.145867   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.145903   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.146029   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.146250   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.146436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.146604   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.230785   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:15.258450   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 18:14:15.286008   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:15.308690   75006 provision.go:87] duration metric: took 292.45797ms to configureAuth
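configureAuth above regenerates the docker-machine style server certificate for the node, signed by the local CA and carrying the SANs listed at provision.go:117 (127.0.0.1, 192.168.72.144, the profile name, localhost, minikube). A rough, hypothetical illustration of issuing such a SAN certificate with Go's crypto/x509, assuming an RSA PKCS#1 CA key pair in ca.pem/ca-key.pem; this is a sketch for orientation, not minikube's actual code path:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // Hypothetical illustration only: issue a server certificate with the SANs from
    // the log, signed by a CA whose PEM cert/key sit in ca.pem and ca-key.pem
    // (an RSA PKCS#1 key is assumed; error handling is omitted for brevity).
    func main() {
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-256678"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"default-k8s-diff-port-256678", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.144")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }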
	I0816 18:14:15.308725   75006 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:15.308927   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:15.308996   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.311959   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.312332   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312492   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.312713   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.312890   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.313028   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.313184   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.313369   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.313387   75006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:15.574487   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:15.574517   75006 machine.go:96] duration metric: took 921.166622ms to provisionDockerMachine
	I0816 18:14:15.574529   75006 start.go:293] postStartSetup for "default-k8s-diff-port-256678" (driver="kvm2")
	I0816 18:14:15.574538   75006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:15.574552   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.574835   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:15.574854   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.577944   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578266   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.578295   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578469   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.578651   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.578800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.578912   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.664404   75006 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:15.668362   75006 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:15.668389   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:15.668459   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:15.668562   75006 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:15.668705   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:15.678830   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:15.702087   75006 start.go:296] duration metric: took 127.545675ms for postStartSetup
	I0816 18:14:15.702129   75006 fix.go:56] duration metric: took 19.172678011s for fixHost
	I0816 18:14:15.702152   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.704680   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705117   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.705154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705288   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.705479   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705643   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.705922   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.706084   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.706095   75006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:15.814313   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832055.788948458
	
	I0816 18:14:15.814337   75006 fix.go:216] guest clock: 1723832055.788948458
	I0816 18:14:15.814348   75006 fix.go:229] Guest: 2024-08-16 18:14:15.788948458 +0000 UTC Remote: 2024-08-16 18:14:15.702133997 +0000 UTC m=+265.826862410 (delta=86.814461ms)
	I0816 18:14:15.814372   75006 fix.go:200] guest clock delta is within tolerance: 86.814461ms
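The fix.go lines above compare the guest clock (`date +%s.%N` over SSH) against the host clock and proceed because the ~87ms delta is within tolerance; a larger skew would trigger a resync before the certificates are used. A small sketch of that comparison, using the sample value from the log and an assumed 2-second threshold (the real tolerance is not visible in this output):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // Sketch of the guest-clock comparison above. The guest output is hard-coded to
    // the sample from the log; minikube reads it over SSH, and the tolerance value
    // here is an assumption for illustration.
    func main() {
        guestOut := "1723832055.788948458" // what `date +%s.%N` printed on the guest
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        tolerance := 2 * time.Second
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }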
	I0816 18:14:15.814382   75006 start.go:83] releasing machines lock for "default-k8s-diff-port-256678", held for 19.284958633s
	I0816 18:14:15.814416   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.814723   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.817995   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.818467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818620   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819299   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819616   75006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:15.819656   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.819840   75006 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:15.819869   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.822797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823189   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823521   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823659   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.823804   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823811   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.823828   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823965   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824064   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.824177   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.824234   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.824368   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824486   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.948709   75006 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:15.956239   75006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:16.103538   75006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:16.109299   75006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:16.109385   75006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:16.125056   75006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:16.125092   75006 start.go:495] detecting cgroup driver to use...
	I0816 18:14:16.125188   75006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:16.141741   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:16.158917   75006 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:16.158993   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:16.173256   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:16.187026   75006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:16.332452   75006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:16.503181   75006 docker.go:233] disabling docker service ...
	I0816 18:14:16.503254   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:16.517961   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:16.535991   75006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:16.667874   75006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:16.799300   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:16.813852   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:16.832891   75006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:16.832953   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.845621   75006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:16.845716   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.856045   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.866117   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.877586   75006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:16.887643   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.897164   75006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.915247   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.924887   75006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:16.933645   75006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:16.933709   75006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:16.946920   75006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:16.955928   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:17.090148   75006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:17.241434   75006 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:17.241531   75006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:17.246730   75006 start.go:563] Will wait 60s for crictl version
	I0816 18:14:17.246796   75006 ssh_runner.go:195] Run: which crictl
	I0816 18:14:17.250397   75006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:17.289194   75006 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:17.289295   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.324401   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.361220   75006 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:15.841411   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .Start
	I0816 18:14:15.841576   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:14:15.842263   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:14:15.842609   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:14:15.843023   75402 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:14:15.844141   75402 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:14:17.215163   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:14:17.216445   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.216933   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.217029   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.216922   76298 retry.go:31] will retry after 286.243503ms: waiting for machine to come up
	I0816 18:14:17.504645   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.505240   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.505262   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.505175   76298 retry.go:31] will retry after 275.715235ms: waiting for machine to come up
	I0816 18:14:17.782804   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.783365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.783392   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.783292   76298 retry.go:31] will retry after 343.088129ms: waiting for machine to come up
	I0816 18:14:14.936549   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273126441s)
	I0816 18:14:14.936584   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.139778   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.201814   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.270552   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:15.270667   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:15.771379   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.271296   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.335242   74828 api_server.go:72] duration metric: took 1.064710561s to wait for apiserver process to appear ...
	I0816 18:14:16.335265   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:16.335282   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:16.335727   74828 api_server.go:269] stopped: https://192.168.50.50:8443/healthz: Get "https://192.168.50.50:8443/healthz": dial tcp 192.168.50.50:8443: connect: connection refused
	I0816 18:14:16.835361   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
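From here api_server.go polls the apiserver's /healthz endpoint; the responses that follow in this run (first "connection refused", then 403 for system:anonymous, then 500 while poststarthooks such as rbac/bootstrap-roles are still completing) are the normal progression for a restarting control plane. A rough sketch of such a polling loop against the node IP from the log; TLS verification is skipped only because this sketch does not load the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Sketch of a healthz wait loop against the node IP from the log. TLS
    // verification is skipped only because this sketch does not load the cluster CA.
    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.50:8443/healthz"
        for i := 0; i < 120; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("apiserver not reachable yet:", err) // "connection refused" phase
            } else {
                resp.Body.Close()
                fmt.Println("healthz status:", resp.StatusCode) // 403, then 500, then eventually 200
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }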
	I0816 18:14:17.362436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:17.365728   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366122   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:17.366154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366403   75006 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:17.370322   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:17.383153   75006 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:17.383303   75006 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:17.383364   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:17.420269   75006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:17.420339   75006 ssh_runner.go:195] Run: which lz4
	I0816 18:14:17.424477   75006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:17.428507   75006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:17.428547   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:18.717202   75006 crio.go:462] duration metric: took 1.292754157s to copy over tarball
	I0816 18:14:18.717278   75006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
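Because `crictl images` found none of the expected v1.31.0 images, the code falls back to the preload path: `stat` fails for /preloaded.tar.lz4, so the cached ~389 MB tarball is copied to the node and unpacked into /var with lz4 so cri-o starts with the images already in its store. A condensed local sketch of that check-copy-extract flow (minikube runs the stat/tar over SSH and streams the file with scp):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Condensed sketch of the preload check/copy/extract flow above, run locally
    // for illustration (minikube runs stat/tar over SSH and streams the file with scp).
    func main() {
        tarball := "/preloaded.tar.lz4"
        cached := "preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"

        // A failing `stat` is how the log decides the preload is not on the node yet.
        if exec.Command("stat", "-c", "%s %y", tarball).Run() != nil {
            fmt.Println("preload missing, copying", cached)
            if err := exec.Command("sudo", "cp", cached, tarball).Run(); err != nil {
                panic(err)
            }
        }
        // Same extraction command as the log: keep xattrs, decompress with lz4, unpack into /var.
        out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
        if err != nil {
            panic(fmt.Errorf("extract failed: %v: %s", err, out))
        }
        fmt.Println("preloaded images extracted into /var")
    }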
	I0816 18:14:19.241691   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.241729   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.241746   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.292883   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.292924   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.336097   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.363715   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.363753   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:19.835848   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.840615   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.840666   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.336291   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.343751   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:20.343785   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.835470   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.841217   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:14:20.849609   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:20.849642   74828 api_server.go:131] duration metric: took 4.514370955s to wait for apiserver health ...
	I0816 18:14:20.849653   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:20.849662   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:20.851828   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
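
Note: the healthz polling above starts with 403s (anonymous access before the RBAC bootstrap roles exist), then 500s listing the post-start hooks that have not finished, and finally a 200 "ok". A rough Go sketch of an equivalent poll follows; the URL, timeout, and the skip-verify TLS setting are assumptions for illustration, not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline expires, mirroring the retry loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed certificate here, so skip
		// verification (acceptable only for a local health probe).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 means RBAC bootstrap is not done yet; 500 lists failing hooks.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.50:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
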
	I0816 18:14:18.127538   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.128044   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.128077   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.127958   76298 retry.go:31] will retry after 543.91951ms: waiting for machine to come up
	I0816 18:14:18.673778   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.674328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.674351   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.674274   76298 retry.go:31] will retry after 694.978788ms: waiting for machine to come up
	I0816 18:14:19.370976   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.371577   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.371605   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.371538   76298 retry.go:31] will retry after 578.640883ms: waiting for machine to come up
	I0816 18:14:19.952328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.952917   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.952941   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.952863   76298 retry.go:31] will retry after 820.19233ms: waiting for machine to come up
	I0816 18:14:20.774767   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:20.775175   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:20.775200   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:20.775134   76298 retry.go:31] will retry after 1.262201815s: waiting for machine to come up
	I0816 18:14:22.038872   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:22.039357   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:22.039385   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:22.039302   76298 retry.go:31] will retry after 1.164593889s: waiting for machine to come up
	I0816 18:14:20.853121   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:20.866117   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:20.888451   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:20.902482   74828 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:20.902530   74828 system_pods.go:61] "coredns-6f6b679f8f-w9cbm" [9b50c913-f492-4432-a50a-e0f727a7b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:20.902545   74828 system_pods.go:61] "etcd-no-preload-864476" [e45a11b8-fa3e-4a6e-9d06-5d82fdaf20dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:20.902557   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [1cf82575-b520-4bc0-9e90-d40c02b4468d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:20.902568   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [8c9123e0-16a4-4940-8464-4bec383bac90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:20.902577   74828 system_pods.go:61] "kube-proxy-vdqxz" [0332e87e-5c0c-41f5-88a9-31b7f8494eb6] Running
	I0816 18:14:20.902587   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [6139753f-b5cf-4af5-a9fa-03fb220e3dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:20.902606   74828 system_pods.go:61] "metrics-server-6867b74b74-rxtwg" [f0d04fc9-24c0-47e3-afdc-f250ef07900c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:20.902620   74828 system_pods.go:61] "storage-provisioner" [65303dd8-27d7-4bf3-ae58-ff5fe556f17f] Running
	I0816 18:14:20.902631   74828 system_pods.go:74] duration metric: took 14.150825ms to wait for pod list to return data ...
	I0816 18:14:20.902645   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:20.909305   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:20.909342   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:20.909355   74828 node_conditions.go:105] duration metric: took 6.699359ms to run NodePressure ...
	I0816 18:14:20.909377   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:21.193348   74828 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198555   74828 kubeadm.go:739] kubelet initialised
	I0816 18:14:21.198585   74828 kubeadm.go:740] duration metric: took 5.20722ms waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198595   74828 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:21.204695   74828 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.212855   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212877   74828 pod_ready.go:82] duration metric: took 8.157781ms for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.212889   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212899   74828 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.220125   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220150   74828 pod_ready.go:82] duration metric: took 7.241861ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.220158   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220166   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.226930   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226957   74828 pod_ready.go:82] duration metric: took 6.783402ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.226967   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226976   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.292011   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292054   74828 pod_ready.go:82] duration metric: took 65.066708ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.292066   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292075   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692536   74828 pod_ready.go:93] pod "kube-proxy-vdqxz" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:21.692564   74828 pod_ready.go:82] duration metric: took 400.476293ms for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692577   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
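
Note: each pod_ready wait above boils down to fetching the pod and checking whether its Ready condition is True (and skipping while the node itself reports Ready=False). A small client-go sketch of that check follows; the kubeconfig path and pod name are assumptions, and this is not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether a kube-system pod has its Ready condition set
// to True, which is what the "Ready" waits in the log are testing.
func podIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// The kubeconfig path is an assumption; point it at the profile's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-proxy-vdqxz")
	fmt.Println(ready, err)
}
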
	I0816 18:14:21.155261   75006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437939279s)
	I0816 18:14:21.155296   75006 crio.go:469] duration metric: took 2.438065212s to extract the tarball
	I0816 18:14:21.155325   75006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:21.199451   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:21.249963   75006 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:14:21.249990   75006 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:14:21.250002   75006 kubeadm.go:934] updating node { 192.168.72.144 8444 v1.31.0 crio true true} ...
	I0816 18:14:21.250129   75006 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-256678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:21.250211   75006 ssh_runner.go:195] Run: crio config
	I0816 18:14:21.299619   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:21.299644   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:21.299663   75006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:21.299684   75006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-256678 NodeName:default-k8s-diff-port-256678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:21.299813   75006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-256678"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:21.299880   75006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:21.310127   75006 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:21.310205   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:21.319566   75006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 18:14:21.337043   75006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:21.352319   75006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 18:14:21.370117   75006 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:21.373986   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:21.386518   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:21.508855   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:21.525184   75006 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678 for IP: 192.168.72.144
	I0816 18:14:21.525209   75006 certs.go:194] generating shared ca certs ...
	I0816 18:14:21.525230   75006 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:21.525413   75006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:21.525468   75006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:21.525481   75006 certs.go:256] generating profile certs ...
	I0816 18:14:21.525604   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/client.key
	I0816 18:14:21.525688   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key.ac6d83aa
	I0816 18:14:21.525738   75006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key
	I0816 18:14:21.525888   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:21.525931   75006 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:21.525944   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:21.525991   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:21.526028   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:21.526052   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:21.526101   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:21.526719   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:21.556992   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:21.590311   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:21.624782   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:21.655118   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 18:14:21.695431   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:21.722575   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:21.744870   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:14:21.770850   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:21.793906   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:21.817643   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:21.839584   75006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:21.856447   75006 ssh_runner.go:195] Run: openssl version
	I0816 18:14:21.862104   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:21.872584   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876886   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876945   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.882424   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:21.892761   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:21.904506   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909624   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909687   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.915765   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:21.927160   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:21.937381   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941423   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941477   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.946741   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:21.958082   75006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:21.962431   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:21.969889   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:21.977302   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:21.983468   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:21.989115   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:21.994569   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
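
Note: the openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours (exit status non-zero means it does). The same test expressed with Go's standard library, as a sketch; the certificate path in main is an assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, i.e. the question `openssl x509 -checkend 86400` answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path is an assumption; the log checks the certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
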
	I0816 18:14:21.999962   75006 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:22.000090   75006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:22.000139   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.034063   75006 cri.go:89] found id: ""
	I0816 18:14:22.034158   75006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:22.043988   75006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:22.044003   75006 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:22.044040   75006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:22.053276   75006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:22.054255   75006 kubeconfig.go:125] found "default-k8s-diff-port-256678" server: "https://192.168.72.144:8444"
	I0816 18:14:22.056408   75006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:22.065394   75006 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I0816 18:14:22.065429   75006 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:22.065443   75006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:22.065496   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.112797   75006 cri.go:89] found id: ""
	I0816 18:14:22.112889   75006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:22.130231   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:22.139432   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:22.139451   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:22.139493   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:14:22.148118   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:22.148168   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:22.158088   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:14:22.166741   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:22.166803   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:22.175578   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.185238   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:22.185286   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.194074   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:14:22.205053   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:22.205105   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:22.216506   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:22.228754   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:22.344597   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.006750   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.275587   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.356515   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.432890   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:23.432991   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.933834   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.433736   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.205567   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:23.206051   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:23.206078   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:23.206007   76298 retry.go:31] will retry after 2.304886921s: waiting for machine to come up
	I0816 18:14:25.512748   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:25.513295   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:25.513321   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:25.513261   76298 retry.go:31] will retry after 2.603393394s: waiting for machine to come up
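
Note: while waiting for the old-k8s-version VM to obtain an IP address, libmachine keeps re-querying the domain and retrying after a growing, jittered delay ("will retry after ..."). A generic Go sketch of that pattern follows; it is illustrative only, not minikube's retry.go, and the initial delay and cap are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or the deadline passes,
// sleeping a growing, jittered interval between attempts, much like the
// "will retry after ..." lines emitted while waiting for the VM's IP.
func retryWithBackoff(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2 // grow the delay, capped at a few seconds
		}
	}
}

func main() {
	attempt := 0
	err := retryWithBackoff(func() error {
		attempt++
		if attempt < 4 {
			return fmt.Errorf("unable to find current IP address yet (attempt %d)", attempt)
		}
		return nil
	}, time.Minute)
	fmt.Println(err)
}
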
	I0816 18:14:23.801346   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:26.199045   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:28.205981   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:24.933846   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.954190   75006 api_server.go:72] duration metric: took 1.521307594s to wait for apiserver process to appear ...
	I0816 18:14:24.954219   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:24.954242   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.835517   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.835552   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.835567   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.842961   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.842992   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.954290   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.963372   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:27.963400   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.455035   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.460244   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.460279   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.954475   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.962766   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.962802   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.454298   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.458650   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.458681   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.954582   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.959359   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.959384   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.455077   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.461068   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.461099   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.954772   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.960557   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.960588   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:31.455232   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:31.460157   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:14:31.471015   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:31.471046   75006 api_server.go:131] duration metric: took 6.516819341s to wait for apiserver health ...
	I0816 18:14:31.471056   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:31.471064   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:31.472930   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:28.118105   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:28.118675   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:28.118706   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:28.118637   76298 retry.go:31] will retry after 2.400714985s: waiting for machine to come up
	I0816 18:14:30.521623   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:30.522157   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:30.522196   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:30.522111   76298 retry.go:31] will retry after 3.210603239s: waiting for machine to come up
	I0816 18:14:30.699930   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:33.200755   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:31.474388   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:31.484723   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:31.502094   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:31.511169   75006 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:31.511207   75006 system_pods.go:61] "coredns-6f6b679f8f-2sgmk" [3c98207c-ab70-435e-a725-3d6b108515d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:31.511215   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [c6d0dbe2-8b80-4fb2-8408-7b2e668cf4cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:31.511221   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [4506e38e-6685-41f8-98b1-738b35476ad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:31.511228   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [14282ea5-2ebc-4ea6-8e06-829e86296333] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:31.511232   75006 system_pods.go:61] "kube-proxy-l4lr2" [880ceec6-c3d1-4934-b02a-7a175ded8a02] Running
	I0816 18:14:31.511236   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [b122d1cd-12e8-4b87-a179-c50baf4c89d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:31.511241   75006 system_pods.go:61] "metrics-server-6867b74b74-fc4h4" [3cb9624e-98b4-4edb-a2de-d6a971520cac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:31.511244   75006 system_pods.go:61] "storage-provisioner" [79442d12-c28b-447e-ae96-e4c2ddb5c4da] Running
	I0816 18:14:31.511250   75006 system_pods.go:74] duration metric: took 9.137933ms to wait for pod list to return data ...
	I0816 18:14:31.511256   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:31.515339   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:31.515361   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:31.515370   75006 node_conditions.go:105] duration metric: took 4.110442ms to run NodePressure ...
	I0816 18:14:31.515387   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:31.774197   75006 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778258   75006 kubeadm.go:739] kubelet initialised
	I0816 18:14:31.778276   75006 kubeadm.go:740] duration metric: took 4.052927ms waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778283   75006 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:31.782595   75006 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:33.788205   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.053312   74510 start.go:364] duration metric: took 53.786665535s to acquireMachinesLock for "embed-certs-777541"
	I0816 18:14:35.053367   74510 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:35.053372   74510 fix.go:54] fixHost starting: 
	I0816 18:14:35.053687   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:35.053718   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:35.073509   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0816 18:14:35.073935   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:35.074396   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:14:35.074420   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:35.074749   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:35.074928   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:35.075102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:14:35.076710   74510 fix.go:112] recreateIfNeeded on embed-certs-777541: state=Stopped err=<nil>
	I0816 18:14:35.076738   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	W0816 18:14:35.076903   74510 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:35.078759   74510 out.go:177] * Restarting existing kvm2 VM for "embed-certs-777541" ...
	I0816 18:14:33.735394   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735898   75402 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:14:33.735925   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735933   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:14:33.736407   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.736439   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:14:33.736459   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | skip adding static IP to network mk-old-k8s-version-783465 - found existing host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"}
	I0816 18:14:33.736478   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:14:33.736492   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:14:33.739028   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739377   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.739397   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739596   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:14:33.739689   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:14:33.739724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:33.739747   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:14:33.739785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:14:33.861036   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:33.861405   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:14:33.862105   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:33.864850   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865245   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.865272   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865542   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:14:33.865796   75402 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:33.865820   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:33.866053   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.868422   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868761   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.868795   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868911   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.869095   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869267   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869415   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.869579   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.869796   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.869810   75402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:33.972880   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:33.972907   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973141   75402 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:14:33.973172   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973378   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.976198   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976530   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.976563   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976747   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.976945   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977086   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977228   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.977369   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.977529   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.977540   75402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:14:34.086092   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:14:34.086123   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.088785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089107   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.089132   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089285   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.089527   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089684   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089828   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.089997   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.090152   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.090168   75402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:34.200744   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:34.200779   75402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:34.200834   75402 buildroot.go:174] setting up certificates
	I0816 18:14:34.200848   75402 provision.go:84] configureAuth start
	I0816 18:14:34.200862   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:34.201175   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:34.203868   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204297   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.204344   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204506   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.207067   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207441   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.207464   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207810   75402 provision.go:143] copyHostCerts
	I0816 18:14:34.207869   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:34.207892   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:34.207951   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:34.208058   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:34.208069   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:34.208103   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:34.208180   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:34.208192   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:34.208220   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:34.208291   75402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
	I0816 18:14:34.413800   75402 provision.go:177] copyRemoteCerts
	I0816 18:14:34.413857   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:34.413881   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.416724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417138   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.417173   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417345   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.417673   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.417894   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.418089   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.495519   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:34.517414   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:14:34.540423   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:14:34.563983   75402 provision.go:87] duration metric: took 363.122639ms to configureAuth
	I0816 18:14:34.564019   75402 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:34.564229   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:14:34.564299   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.567149   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567550   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.567580   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567753   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.567935   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568098   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568255   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.568448   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.568659   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.568680   75402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:34.824064   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:34.824091   75402 machine.go:96] duration metric: took 958.278616ms to provisionDockerMachine
	I0816 18:14:34.824106   75402 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:14:34.824120   75402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:34.824169   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:34.824556   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:34.824599   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.827203   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827517   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.827547   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827677   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.827869   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.828033   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.828171   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.912148   75402 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:34.916652   75402 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:34.916681   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:34.916755   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:34.916864   75402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:34.916989   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:34.927061   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:34.949703   75402 start.go:296] duration metric: took 125.581331ms for postStartSetup
	I0816 18:14:34.949743   75402 fix.go:56] duration metric: took 19.13519024s for fixHost
	I0816 18:14:34.949763   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.952740   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953090   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.953124   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.953532   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953715   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953861   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.954029   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.954229   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.954242   75402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:35.053143   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832075.025252523
	
	I0816 18:14:35.053171   75402 fix.go:216] guest clock: 1723832075.025252523
	I0816 18:14:35.053180   75402 fix.go:229] Guest: 2024-08-16 18:14:35.025252523 +0000 UTC Remote: 2024-08-16 18:14:34.949747236 +0000 UTC m=+221.880938919 (delta=75.505287ms)
	I0816 18:14:35.053204   75402 fix.go:200] guest clock delta is within tolerance: 75.505287ms
	I0816 18:14:35.053211   75402 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 19.238692888s
	I0816 18:14:35.053243   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.053549   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:35.056365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.056792   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.056823   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.057009   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057509   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057731   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057831   75402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:35.057892   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.057951   75402 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:35.057972   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.060543   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060733   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060987   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061016   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061126   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061148   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061154   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061319   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061339   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061456   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061518   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061639   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061720   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.061773   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.174137   75402 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:35.181704   75402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:35.323490   75402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:35.330733   75402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:35.330807   75402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:35.350653   75402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:35.350679   75402 start.go:495] detecting cgroup driver to use...
	I0816 18:14:35.350763   75402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:35.372307   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:35.386513   75402 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:35.386598   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:35.400406   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:35.414761   75402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:35.540356   75402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:35.675726   75402 docker.go:233] disabling docker service ...
	I0816 18:14:35.675793   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:35.691169   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:35.707288   75402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:35.858149   75402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:35.981654   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:35.996396   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:36.013656   75402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:14:36.013711   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.023839   75402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:36.023907   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.033889   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.043727   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
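Taken together, the three sed edits above pin the pause image, switch CRI-O's cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod". A quick way to confirm the resulting drop-in (sketch; file path as used in the commands above) is:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, assuming the substitutions matched:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"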
	I0816 18:14:36.053496   75402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:36.063694   75402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:36.072919   75402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:36.072979   75402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:36.085707   75402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
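The sysctl probe fails only because br_netfilter is not loaded yet; the two commands that follow are the workaround. A minimal manual equivalent (sketch, run on the node) is:

	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	# once the module is loaded the bridge sysctl exists again:
	sysctl net.bridge.bridge-nf-call-iptables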
	I0816 18:14:36.095377   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:36.219235   75402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:36.384915   75402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:36.384990   75402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:36.392122   75402 start.go:563] Will wait 60s for crictl version
	I0816 18:14:36.392196   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:36.397589   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:36.443581   75402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:36.443710   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.473740   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.512542   75402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:14:36.513678   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:36.517404   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.517912   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:36.517948   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.518190   75402 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:36.523577   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:36.536188   75402 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:36.536361   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:14:36.536425   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:36.587027   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:36.587085   75402 ssh_runner.go:195] Run: which lz4
	I0816 18:14:36.590780   75402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:36.594635   75402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:36.594673   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 18:14:35.080033   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Start
	I0816 18:14:35.080220   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring networks are active...
	I0816 18:14:35.080971   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network default is active
	I0816 18:14:35.081366   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network mk-embed-certs-777541 is active
	I0816 18:14:35.081887   74510 main.go:141] libmachine: (embed-certs-777541) Getting domain xml...
	I0816 18:14:35.082634   74510 main.go:141] libmachine: (embed-certs-777541) Creating domain...
	I0816 18:14:36.459300   74510 main.go:141] libmachine: (embed-certs-777541) Waiting to get IP...
	I0816 18:14:36.460282   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.460801   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.460883   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.460778   76422 retry.go:31] will retry after 291.491491ms: waiting for machine to come up
	I0816 18:14:36.754548   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.755372   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.755412   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.755313   76422 retry.go:31] will retry after 356.347467ms: waiting for machine to come up
	I0816 18:14:37.113124   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.113704   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.113739   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.113676   76422 retry.go:31] will retry after 386.244375ms: waiting for machine to come up
	I0816 18:14:37.502241   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.502796   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.502826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.502750   76422 retry.go:31] will retry after 437.69847ms: waiting for machine to come up
	I0816 18:14:37.942667   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.943423   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.943456   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.943378   76422 retry.go:31] will retry after 709.064032ms: waiting for machine to come up
	I0816 18:14:38.653840   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:38.654349   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:38.654386   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:38.654297   76422 retry.go:31] will retry after 594.417028ms: waiting for machine to come up
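While libmachine polls for the embed-certs-777541 lease above, the same information can be read straight from libvirt on the KVM host (sketch; network name and MAC taken from the log, virsh assumed to be available):

	virsh net-dhcp-leases mk-embed-certs-777541
	# the retry loop succeeds once a lease for 52:54:00:54:9a:0c shows up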
	I0816 18:14:34.700134   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:34.700158   74828 pod_ready.go:82] duration metric: took 13.007571631s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:34.700171   74828 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:36.707977   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:38.708527   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.790842   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:37.791236   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:37.791278   75006 pod_ready.go:82] duration metric: took 6.008656328s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:37.791294   75006 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798513   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:39.798543   75006 pod_ready.go:82] duration metric: took 2.007240233s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798557   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:38.127403   75402 crio.go:462] duration metric: took 1.536659915s to copy over tarball
	I0816 18:14:38.127504   75402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:41.109575   75402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982013621s)
	I0816 18:14:41.109639   75402 crio.go:469] duration metric: took 2.982198625s to extract the tarball
	I0816 18:14:41.109650   75402 ssh_runner.go:146] rm: /preloaded.tar.lz4
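The preload sequence above (stat misses /preloaded.tar.lz4, the cached tarball is copied over, extracted into /var, then removed) can be reproduced by hand roughly as follows; the scp target host is a placeholder, the cache path and tar flags are the ones shown in the log:

	scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 <node>:/preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4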
	I0816 18:14:41.152940   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:41.185863   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:41.185892   75402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:14:41.185982   75402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.186003   75402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.186036   75402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.186044   75402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.186103   75402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:14:41.186171   75402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187521   75402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:14:41.187532   75402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187542   75402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.187527   75402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.187595   75402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.187605   75402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.187688   75402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.187840   75402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.421551   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:14:41.462506   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.467716   75402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:14:41.467758   75402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:14:41.467810   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508571   75402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:14:41.508638   75402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.508687   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508691   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.514560   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.520003   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.526475   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.526892   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.533271   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.569269   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.569426   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.694043   75402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:14:41.694100   75402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.694049   75402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:14:41.694210   75402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.694173   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.694268   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.701292   75402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:14:41.701337   75402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.701389   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.707345   75402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:14:41.707415   75402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.707467   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.711820   75402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:14:41.711854   75402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.711896   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.723813   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.723850   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.723814   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.723939   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.723951   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.724003   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.724060   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.872645   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.872674   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:14:41.873747   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.873786   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.873891   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.873899   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.873960   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.997519   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:14:42.002048   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:42.002091   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:42.002140   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:42.002178   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:42.002218   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:42.070993   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:42.115418   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:14:42.115527   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:14:42.115623   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:14:42.115631   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:14:42.115689   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:14:42.235706   75402 cache_images.go:92] duration metric: took 1.049784726s to LoadCachedImages
	W0816 18:14:42.235807   75402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
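Each "needs transfer" decision above follows the same pattern: inspect the tag in the runtime, and if the pinned digest is not there, remove the stale tag and load the copy from the local image cache. A hand-rolled version for the pause image (sketch; paths as reported above) looks like:

	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2
	sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	# the reload then fails in this run because the cached file is missing:
	stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2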
	I0816 18:14:42.235821   75402 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:14:42.235939   75402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
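The kubelet unit fragment above ends up as the 10-kubeadm.conf drop-in that is copied onto the node a few lines later (430 bytes). On the node it can be inspected with (sketch):

	systemctl cat kubelet
	# or read the drop-in directly:
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf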
	I0816 18:14:42.236024   75402 ssh_runner.go:195] Run: crio config
	I0816 18:14:42.286742   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:14:42.286763   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:42.286771   75402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:42.286789   75402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:14:42.286904   75402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
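The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what the phased kubeadm calls below consume. If needed, the rendered file can be exercised without committing changes by a dry run (sketch; binary path as used elsewhere in this run, after the .new file has been copied into place):

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run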
	
	I0816 18:14:42.286961   75402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:14:42.297015   75402 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:42.297098   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:42.306400   75402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:14:42.322812   75402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:42.339791   75402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 18:14:42.356930   75402 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:42.360578   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:42.373248   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:42.495499   75402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:42.511910   75402 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:14:42.511942   75402 certs.go:194] generating shared ca certs ...
	I0816 18:14:42.511964   75402 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:42.512147   75402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:42.512206   75402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:42.512220   75402 certs.go:256] generating profile certs ...
	I0816 18:14:42.512361   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:14:42.512431   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:14:42.512483   75402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:14:42.512664   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:42.512709   75402 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:42.512724   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:42.512754   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:42.512794   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:42.512825   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:42.512881   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:42.513660   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:42.552291   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:42.585617   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:42.611017   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:42.638092   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:14:42.676877   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:14:42.710091   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:42.743734   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:42.779905   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:42.802779   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:42.826432   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:42.849286   75402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:42.866901   75402 ssh_runner.go:195] Run: openssl version
	I0816 18:14:42.872283   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:42.882976   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887432   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887504   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.893275   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:42.903687   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:42.915232   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919669   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919735   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.925282   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:42.937888   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:42.949994   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954495   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954548   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.960295   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
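The openssl/ln pairs above build the standard subject-hash lookup names under /etc/ssl/certs; the hash in each symlink comes straight from the certificate (sketch, using the minikubeCA example from this run):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 above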
	I0816 18:14:42.972006   75402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:42.976450   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:42.982741   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:42.988649   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:42.995021   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:43.000965   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:43.007030   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
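The -checkend 86400 probes exit 0 only if the certificate stays valid for at least another 24 hours, which is how stale control-plane certs would be caught here. A standalone check (sketch) is:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"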
	I0816 18:14:43.012891   75402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:43.012983   75402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:43.013071   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.050670   75402 cri.go:89] found id: ""
	I0816 18:14:43.050741   75402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:43.060748   75402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:43.060773   75402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:43.060825   75402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:43.070299   75402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:43.071251   75402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:14:43.071945   75402 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-783465" cluster setting kubeconfig missing "old-k8s-version-783465" context setting]
	I0816 18:14:43.072870   75402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:39.250064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:39.250979   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:39.251028   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:39.250914   76422 retry.go:31] will retry after 1.014851653s: waiting for machine to come up
	I0816 18:14:40.266811   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:40.267287   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:40.267323   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:40.267238   76422 retry.go:31] will retry after 1.333311972s: waiting for machine to come up
	I0816 18:14:41.602031   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:41.602532   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:41.602565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:41.602480   76422 retry.go:31] will retry after 1.525496469s: waiting for machine to come up
	I0816 18:14:43.130136   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:43.130620   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:43.130661   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:43.130563   76422 retry.go:31] will retry after 2.206344656s: waiting for machine to come up
	I0816 18:14:41.206879   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.706278   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:41.806382   75006 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.927145   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.927173   75006 pod_ready.go:82] duration metric: took 4.128607781s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.927182   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932293   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.932314   75006 pod_ready.go:82] duration metric: took 5.122737ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932326   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937128   75006 pod_ready.go:93] pod "kube-proxy-l4lr2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.937146   75006 pod_ready.go:82] duration metric: took 4.812798ms for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937154   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.941992   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.942018   75006 pod_ready.go:82] duration metric: took 4.856588ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.942030   75006 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.141753   75402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:43.154269   75402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0816 18:14:43.154324   75402 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:43.154341   75402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:43.154404   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.192966   75402 cri.go:89] found id: ""
	I0816 18:14:43.193035   75402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:43.213101   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:43.222811   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:43.222826   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:43.222870   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:43.232196   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:43.232261   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:43.241633   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:43.250751   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:43.250800   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:43.260197   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.268943   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:43.269000   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.277887   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:43.286281   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:43.286391   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
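The four grep/rm pairs above apply one rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed. Collapsed into a loop (sketch):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done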
	I0816 18:14:43.295899   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:43.306026   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:43.441487   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.213457   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.431649   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.553955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.646817   75402 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:44.646923   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.147202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.648050   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.147958   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.647398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.646992   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.338228   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:45.338729   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:45.338763   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:45.338660   76422 retry.go:31] will retry after 2.526891535s: waiting for machine to come up
	I0816 18:14:47.868326   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:47.868821   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:47.868853   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:47.868774   76422 retry.go:31] will retry after 2.866643935s: waiting for machine to come up
	I0816 18:14:45.706669   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:47.707062   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:45.948791   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.447930   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.147987   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.646974   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.147114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.647020   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.147765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.647135   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.147506   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.647568   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.647865   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.736760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:50.737295   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:50.737331   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:50.737245   76422 retry.go:31] will retry after 3.824271015s: waiting for machine to come up
	I0816 18:14:50.206249   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.206435   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:50.449586   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.948577   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:54.566285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.566784   74510 main.go:141] libmachine: (embed-certs-777541) Found IP for machine: 192.168.61.218
	I0816 18:14:54.566809   74510 main.go:141] libmachine: (embed-certs-777541) Reserving static IP address...
	I0816 18:14:54.566825   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has current primary IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.567171   74510 main.go:141] libmachine: (embed-certs-777541) Reserved static IP address: 192.168.61.218
	I0816 18:14:54.567193   74510 main.go:141] libmachine: (embed-certs-777541) Waiting for SSH to be available...
	I0816 18:14:54.567211   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.567231   74510 main.go:141] libmachine: (embed-certs-777541) DBG | skip adding static IP to network mk-embed-certs-777541 - found existing host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"}
	I0816 18:14:54.567245   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Getting to WaitForSSH function...
	I0816 18:14:54.569546   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.569864   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.569890   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.570019   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH client type: external
	I0816 18:14:54.570046   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa (-rw-------)
	I0816 18:14:54.570073   74510 main.go:141] libmachine: (embed-certs-777541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:54.570082   74510 main.go:141] libmachine: (embed-certs-777541) DBG | About to run SSH command:
	I0816 18:14:54.570109   74510 main.go:141] libmachine: (embed-certs-777541) DBG | exit 0
	I0816 18:14:54.692450   74510 main.go:141] libmachine: (embed-certs-777541) DBG | SSH cmd err, output: <nil>: 
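WaitForSSH shells out to /usr/bin/ssh with host-key checking disabled and runs `exit 0`; a zero exit status means the guest's sshd is up and accepting the machine key. A minimal sketch of that probe with os/exec, using the IP and key path from the log and a trimmed-down option set:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "ssh ... exit 0" succeeds against the guest.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	ip := "192.168.61.218" // from the DHCP lease in the log
	key := "/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa"
	for i := 0; i < 30; i++ {
		if err := sshReady(ip, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}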
	I0816 18:14:54.692828   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetConfigRaw
	I0816 18:14:54.693486   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:54.696565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.696943   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.696987   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.697248   74510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/config.json ...
	I0816 18:14:54.697455   74510 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:54.697475   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:54.697686   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.700172   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700491   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.700520   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700716   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.700906   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701239   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.701440   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.701650   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.701662   74510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:54.800770   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:54.800805   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801047   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:14:54.801079   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801264   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.804313   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804734   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.804761   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804940   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.805132   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805485   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.805711   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.805869   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.805886   74510 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-777541 && echo "embed-certs-777541" | sudo tee /etc/hostname
	I0816 18:14:54.918908   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-777541
	
	I0816 18:14:54.918949   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.921760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922117   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.922146   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922338   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.922511   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922681   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922843   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.923033   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.923243   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.923261   74510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-777541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-777541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-777541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:55.028983   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:55.029016   74510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:55.029040   74510 buildroot.go:174] setting up certificates
	I0816 18:14:55.029051   74510 provision.go:84] configureAuth start
	I0816 18:14:55.029064   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:55.029320   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.032273   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032693   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.032743   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032983   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.035257   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035581   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.035606   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035742   74510 provision.go:143] copyHostCerts
	I0816 18:14:55.035797   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:55.035814   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:55.035899   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:55.035996   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:55.036004   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:55.036024   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:55.036081   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:55.036087   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:55.036106   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:55.036155   74510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.embed-certs-777541 san=[127.0.0.1 192.168.61.218 embed-certs-777541 localhost minikube]
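provision.go issues a server certificate signed by the local CA with the SANs listed above (127.0.0.1, the guest IP, the hostname, localhost, minikube). A minimal crypto/x509 sketch of issuing such a cert from a CA; key sizes, validity periods, and the in-memory CA are simplified assumptions, not minikube's exact settings:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs seen in the log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-777541"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-777541", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.218")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}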
	I0816 18:14:55.182540   74510 provision.go:177] copyRemoteCerts
	I0816 18:14:55.182606   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:55.182633   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.185807   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186179   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.186229   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186429   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.186619   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.186770   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.186884   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.262494   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:14:55.285186   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:55.307082   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:55.328912   74510 provision.go:87] duration metric: took 299.848734ms to configureAuth
	I0816 18:14:55.328934   74510 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:55.329140   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:55.329215   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.331989   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332366   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.332414   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332594   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.332801   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333006   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333158   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.333312   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.333501   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.333522   74510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:55.579734   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:55.579765   74510 machine.go:96] duration metric: took 882.296402ms to provisionDockerMachine
	I0816 18:14:55.579781   74510 start.go:293] postStartSetup for "embed-certs-777541" (driver="kvm2")
	I0816 18:14:55.579793   74510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:55.579814   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.580182   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:55.580216   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.582826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583250   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.583285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583374   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.583574   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.583739   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.583972   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.663379   74510 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:55.667205   74510 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:55.667231   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:55.667321   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:55.667426   74510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:55.667560   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:55.676427   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:55.698188   74510 start.go:296] duration metric: took 118.396211ms for postStartSetup
	I0816 18:14:55.698226   74510 fix.go:56] duration metric: took 20.644852989s for fixHost
	I0816 18:14:55.698245   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.701014   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701359   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.701390   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701587   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.701755   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.701924   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.702070   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.702241   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.702452   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.702464   74510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:55.801397   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832095.756052952
	
	I0816 18:14:55.801431   74510 fix.go:216] guest clock: 1723832095.756052952
	I0816 18:14:55.801443   74510 fix.go:229] Guest: 2024-08-16 18:14:55.756052952 +0000 UTC Remote: 2024-08-16 18:14:55.698231489 +0000 UTC m=+357.018707788 (delta=57.821463ms)
	I0816 18:14:55.801492   74510 fix.go:200] guest clock delta is within tolerance: 57.821463ms
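fix.go runs `date +%s.%N` on the guest, parses the result, and compares it against the host clock; here the delta is about 58ms and passes the tolerance check. A minimal sketch of parsing that output and applying a tolerance; the 2-second threshold is an assumption for illustration, not minikube's actual limit:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output such as "1723832095.756052952"
// into a time.Time. It assumes the fractional part, if present, is the full
// 9-digit nanosecond field, which is what date +%N prints.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723832095.756052952") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
}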
	I0816 18:14:55.801504   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 20.74815396s
	I0816 18:14:55.801528   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.801781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.804216   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804617   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.804659   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804795   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805395   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805622   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805730   74510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:55.805781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.805849   74510 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:55.805877   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.808587   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.808946   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.808978   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809080   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809249   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809415   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.809417   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809442   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809575   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809597   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809720   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809766   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.809857   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809970   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.885026   74510 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:55.927940   74510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:56.072936   74510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:56.080952   74510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:56.081029   74510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:56.100709   74510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:56.100734   74510 start.go:495] detecting cgroup driver to use...
	I0816 18:14:56.100791   74510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:56.115759   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:56.129714   74510 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:56.129774   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:56.142909   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:56.156413   74510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:56.268818   74510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:56.424536   74510 docker.go:233] disabling docker service ...
	I0816 18:14:56.424612   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:56.438033   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:56.450479   74510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:56.560132   74510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:56.683671   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:56.697636   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:56.716486   74510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:56.716560   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.726082   74510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:56.726144   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.735971   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.745410   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.754952   74510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:56.764717   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.774153   74510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.789843   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
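The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and ensures default_sysctls contains net.ipv4.ip_unprivileged_port_start=0. A minimal sketch applying equivalent regex replacements to the file content in Go; this is a simplified stand-in for the sed commands, not minikube's implementation:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies the same edits the log performs with sed.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then add one after cgroup_manager.
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
	conf = conmon.ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

	// Make sure the unprivileged-port sysctl is present.
	if !strings.Contains(conf, "net.ipv4.ip_unprivileged_port_start=0") {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	sample := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(sample))
}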
	I0816 18:14:56.799399   74510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:56.807679   74510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:56.807743   74510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:56.819873   74510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
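The sysctl probe for net.bridge.bridge-nf-call-iptables fails with status 255 because br_netfilter is not loaded yet, so minikube loads the module and then enables IPv4 forwarding. A minimal sketch of that check, load, and verify sequence; it assumes a Linux guest and needs root to write the /proc/sys entries:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// The sysctl file only exists once br_netfilter is loaded.
	if _, err := os.Stat(key); err != nil {
		fmt.Println("bridge netfilter not available yet, loading br_netfilter")
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}

	// Enable IPv4 forwarding, as the log does with "echo 1 > /proc/sys/net/ipv4/ip_forward".
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Printf("could not enable ip_forward (need root?): %v\n", err)
		return
	}

	if val, err := os.ReadFile(key); err == nil {
		fmt.Printf("bridge-nf-call-iptables = %s\n", strings.TrimSpace(string(val)))
	}
}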
	I0816 18:14:56.829921   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:56.936372   74510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:57.073931   74510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:57.073998   74510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:57.078254   74510 start.go:563] Will wait 60s for crictl version
	I0816 18:14:57.078327   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:14:57.081833   74510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:57.121402   74510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:57.121476   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.149262   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.183015   74510 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:53.146986   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.647279   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.147587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.647911   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.147322   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.647765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.147695   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.647296   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.147031   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.647108   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.184157   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:57.186758   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187177   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:57.187206   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187439   74510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:57.191152   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:57.203073   74510 kubeadm.go:883] updating cluster {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:57.203240   74510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:57.203332   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:57.238289   74510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:57.238348   74510 ssh_runner.go:195] Run: which lz4
	I0816 18:14:57.242251   74510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:57.246081   74510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:57.246124   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:58.459887   74510 crio.go:462] duration metric: took 1.217672418s to copy over tarball
	I0816 18:14:58.459960   74510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
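Because the guest has no /preloaded.tar.lz4, minikube copies the ~389MB preloaded image tarball over and unpacks it into /var with tar's lz4 filter, keeping xattrs so image file capabilities survive. A minimal sketch of the existence check followed by the extraction step, assuming the tarball has already been copied to the guest path from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // guest-side path from the log

	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("%s not present yet; it would be copied from the host cache first\n", tarball)
		return
	}

	// Same flags as the log: keep xattrs (security.capability) and pipe through lz4.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted into /var")
}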
	I0816 18:14:54.707069   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.206750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:55.449391   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.449830   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:59.451338   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:58.147661   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.647270   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.147355   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.647821   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.148023   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.647165   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.147669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.647960   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.147721   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.647932   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.545989   74510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085985152s)
	I0816 18:15:00.546028   74510 crio.go:469] duration metric: took 2.086110527s to extract the tarball
	I0816 18:15:00.546039   74510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:15:00.587096   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:15:00.630366   74510 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:15:00.630394   74510 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:15:00.630405   74510 kubeadm.go:934] updating node { 192.168.61.218 8443 v1.31.0 crio true true} ...
	I0816 18:15:00.630540   74510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-777541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:15:00.630630   74510 ssh_runner.go:195] Run: crio config
	I0816 18:15:00.681196   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:00.681224   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:00.681235   74510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:15:00.681262   74510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-777541 NodeName:embed-certs-777541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:15:00.681439   74510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-777541"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:15:00.681534   74510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:15:00.691239   74510 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:15:00.691294   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:15:00.700059   74510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 18:15:00.717826   74510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:15:00.733475   74510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 18:15:00.750175   74510 ssh_runner.go:195] Run: grep 192.168.61.218	control-plane.minikube.internal$ /etc/hosts
	I0816 18:15:00.753865   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:15:00.765531   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:00.875234   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:00.893095   74510 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541 for IP: 192.168.61.218
	I0816 18:15:00.893115   74510 certs.go:194] generating shared ca certs ...
	I0816 18:15:00.893131   74510 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:00.893274   74510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:15:00.893318   74510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:15:00.893327   74510 certs.go:256] generating profile certs ...
	I0816 18:15:00.893403   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/client.key
	I0816 18:15:00.893459   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key.dd0c1a01
	I0816 18:15:00.893503   74510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key
	I0816 18:15:00.893617   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:15:00.893645   74510 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:15:00.893655   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:15:00.893675   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:15:00.893698   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:15:00.893721   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:15:00.893759   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:15:00.894445   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:15:00.936535   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:15:00.969775   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:15:01.013053   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:15:01.046087   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 18:15:01.073290   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:15:01.097033   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:15:01.119859   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:15:01.141943   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:15:01.168752   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:15:01.191193   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:15:01.213691   74510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:15:01.229374   74510 ssh_runner.go:195] Run: openssl version
	I0816 18:15:01.234563   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:15:01.244301   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248156   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248220   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.253468   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:15:01.262917   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:15:01.272577   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276790   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276841   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.281847   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:15:01.291789   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:15:01.302422   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306320   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306364   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.311335   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
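Each ls/hash/symlink pass above follows the standard OpenSSL CA directory convention: a certificate placed in /usr/share/ca-certificates is hashed with openssl x509 -hash and then exposed under /etc/ssl/certs as a <hash>.0 symlink, which is where OpenSSL-based clients look CAs up. A minimal sketch of the same convention, using the minikubeCA.pem path from the log (the hash is whatever openssl prints, b5213941 in this run):

    # publish a CA certificate under the <subject-hash>.0 name OpenSSL expects
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"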
	I0816 18:15:01.320713   74510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:15:01.324442   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:15:01.330137   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:15:01.335693   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:15:01.340987   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:15:01.346071   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:15:01.351280   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
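The -checkend 86400 calls above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the certificate as expiring. The same check can be reproduced by hand against any of the paths shown, for example:

    # exit status 0: valid for at least another 24h; non-zero: expires sooner (or is unreadable)
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "still valid in 24h" || echo "expires within 24h"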
	I0816 18:15:01.357275   74510 kubeadm.go:392] StartCluster: {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:15:01.357388   74510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:15:01.357427   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.400422   74510 cri.go:89] found id: ""
	I0816 18:15:01.400497   74510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:15:01.410142   74510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:15:01.410162   74510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:15:01.410211   74510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:15:01.419129   74510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:15:01.420130   74510 kubeconfig.go:125] found "embed-certs-777541" server: "https://192.168.61.218:8443"
	I0816 18:15:01.422036   74510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:15:01.430665   74510 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.218
	I0816 18:15:01.430694   74510 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:15:01.430705   74510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:15:01.430762   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.469108   74510 cri.go:89] found id: ""
	I0816 18:15:01.469182   74510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:15:01.486125   74510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:15:01.495311   74510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:15:01.495335   74510 kubeadm.go:157] found existing configuration files:
	
	I0816 18:15:01.495384   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:15:01.504066   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:15:01.504128   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:15:01.513222   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:15:01.521593   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:15:01.521692   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:15:01.530413   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.539027   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:15:01.539101   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.547802   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:15:01.557143   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:15:01.557203   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:15:01.568616   74510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:15:01.578091   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:01.700661   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.631047   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.833132   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.900476   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
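Because existing configuration was found (kubeadm.go:408 above), minikube rebuilds the control plane by running individual kubeadm init phases rather than a full kubeadm init. A condensed sketch of the same sequence, assuming the kubeadm binary under /var/lib/minikube/binaries/v1.31.0 and the generated kubeadm.yaml are already in place:

    # re-run only the phases needed for a restart: certs, kubeconfigs, kubelet, static pods, local etcd
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two arguments
      sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase $phase --config "$CFG"
    done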
	I0816 18:15:02.972431   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:15:02.972514   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.473296   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.707731   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:02.206825   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:01.948070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.948398   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.147098   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.646983   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.147320   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.647649   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.647999   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.147901   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.647340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.147339   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.648033   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.973603   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.472779   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.972846   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.473594   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.487878   74510 api_server.go:72] duration metric: took 2.51545841s to wait for apiserver process to appear ...
	I0816 18:15:05.487914   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:15:05.487937   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.450583   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.450618   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.450635   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.495625   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.495656   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.495669   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.516711   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.516744   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:04.836663   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:07.206999   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:06.447839   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.449939   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.988897   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.996347   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.996374   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.488013   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.499514   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:09.499559   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.988080   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.992106   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:15:09.998515   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:15:09.998542   74510 api_server.go:131] duration metric: took 4.510619176s to wait for apiserver health ...
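The 403 -> 500 -> 200 progression above is the apiserver coming up: anonymous requests are rejected while authorization is still initialising, then the remaining post-start hooks flip from [-] failed to [+] ok, and finally /healthz returns a plain 200 "ok". The same endpoint can be probed directly (server address taken from the log; -k because the certificate is signed by the cluster's own CA rather than a system-trusted one):

    curl -k https://192.168.61.218:8443/healthz             # "ok" once every check passes
    curl -k "https://192.168.61.218:8443/healthz?verbose"   # per-check [+]/[-] breakdown like the log above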
	I0816 18:15:09.998555   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:09.998563   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:10.000470   74510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:15:10.001870   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:15:10.011805   74510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:15:10.032349   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:15:10.046765   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:15:10.046798   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:15:10.046808   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:15:10.046817   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:15:10.046829   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:15:10.046838   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 18:15:10.046847   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:15:10.046855   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:15:10.046867   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 18:15:10.046876   74510 system_pods.go:74] duration metric: took 14.506593ms to wait for pod list to return data ...
	I0816 18:15:10.046889   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:15:10.050663   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:15:10.050686   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:15:10.050699   74510 node_conditions.go:105] duration metric: took 3.805313ms to run NodePressure ...
	I0816 18:15:10.050717   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:10.344177   74510 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348795   74510 kubeadm.go:739] kubelet initialised
	I0816 18:15:10.348820   74510 kubeadm.go:740] duration metric: took 4.612695ms waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348830   74510 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:10.355270   74510 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.361564   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361584   74510 pod_ready.go:82] duration metric: took 6.2936ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.361592   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361598   74510 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.367126   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367149   74510 pod_ready.go:82] duration metric: took 5.542782ms for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.367159   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367166   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.372241   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372262   74510 pod_ready.go:82] duration metric: took 5.086551ms for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.372273   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372301   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.436397   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436423   74510 pod_ready.go:82] duration metric: took 64.108858ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.436432   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436443   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.836116   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836146   74510 pod_ready.go:82] duration metric: took 399.693364ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.836158   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836165   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.235403   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235426   74510 pod_ready.go:82] duration metric: took 399.255693ms for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.235439   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235445   74510 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.635717   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635746   74510 pod_ready.go:82] duration metric: took 400.29283ms for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.635756   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635762   74510 pod_ready.go:39] duration metric: took 1.286923943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:11.635784   74510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:15:11.646221   74510 ops.go:34] apiserver oom_adj: -16
	I0816 18:15:11.646248   74510 kubeadm.go:597] duration metric: took 10.23607804s to restartPrimaryControlPlane
	I0816 18:15:11.646269   74510 kubeadm.go:394] duration metric: took 10.288999278s to StartCluster
	I0816 18:15:11.646322   74510 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.646405   74510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:15:11.648652   74510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.648939   74510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:15:11.649056   74510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:15:11.649124   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:15:11.649155   74510 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-777541"
	I0816 18:15:11.649165   74510 addons.go:69] Setting metrics-server=true in profile "embed-certs-777541"
	I0816 18:15:11.649192   74510 addons.go:234] Setting addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:11.649201   74510 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-777541"
	W0816 18:15:11.649205   74510 addons.go:243] addon metrics-server should already be in state true
	I0816 18:15:11.649193   74510 addons.go:69] Setting default-storageclass=true in profile "embed-certs-777541"
	I0816 18:15:11.649252   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649254   74510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-777541"
	W0816 18:15:11.649209   74510 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:15:11.649332   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649702   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649706   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649742   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649772   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649877   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649930   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.651580   74510 out.go:177] * Verifying Kubernetes components...
	I0816 18:15:11.652903   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:11.665975   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0816 18:15:11.666041   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0816 18:15:11.666404   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666439   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666986   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667005   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667051   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667085   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667312   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667517   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667846   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.667899   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.668039   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.668077   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.669328   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0816 18:15:11.669765   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.670270   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.670301   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.670658   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.670896   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.674148   74510 addons.go:234] Setting addon default-storageclass=true in "embed-certs-777541"
	W0816 18:15:11.674165   74510 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:15:11.674184   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.674448   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.674482   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.683629   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0816 18:15:11.683637   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0816 18:15:11.684040   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684048   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684499   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684516   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684653   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684670   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684968   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685114   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685136   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.685329   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.687030   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.687130   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.688852   74510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:15:11.688855   74510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:15:08.147308   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.647669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.147149   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.647072   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.147381   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.647567   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.147101   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.647587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.146972   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.647842   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.689590   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0816 18:15:11.690041   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.690152   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:15:11.690170   74510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:15:11.690186   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690223   74510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:11.690238   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:15:11.690253   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690606   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.690627   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.691006   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.691543   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.691575   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.693646   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693669   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693988   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694007   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694051   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694275   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694436   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694468   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694545   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694602   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694677   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.694885   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.709409   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0816 18:15:11.709800   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.710343   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.710363   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.710700   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.710874   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.712484   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.712691   74510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:11.712706   74510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:15:11.712723   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.715590   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716017   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.716050   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716167   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.716379   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.716572   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.716737   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.864710   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:11.885871   74510 node_ready.go:35] waiting up to 6m0s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:11.985725   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:12.007635   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:15:12.007669   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:15:12.040044   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:12.059661   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:15:12.059687   74510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:15:12.123787   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.123812   74510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:15:12.167249   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.457960   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.457985   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458264   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:12.458315   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458334   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.458348   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.458360   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458577   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458590   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468651   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.468675   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.468921   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.468940   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468963   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.203995   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163904081s)
	I0816 18:15:13.204048   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204060   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204309   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.204350   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204359   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.204368   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204376   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204562   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204589   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213068   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.045790147s)
	I0816 18:15:13.213101   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213115   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213533   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213551   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213555   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.213560   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213595   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213869   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213887   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213897   74510 addons.go:475] Verifying addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:13.213901   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.215724   74510 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 18:15:13.217031   74510 addons.go:510] duration metric: took 1.567977779s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 18:15:09.706813   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:11.708577   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:10.947986   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:12.949227   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:13.147558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.647755   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.147408   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.647810   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.147888   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.647476   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.647785   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.647852   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.889379   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:15.889764   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:18.390031   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:14.207743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:16.705831   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:15.448826   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:17.950756   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:18.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.647013   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.147027   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.647100   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.147070   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.147251   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.647856   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.147427   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.647231   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.890110   74510 node_ready.go:49] node "embed-certs-777541" has status "Ready":"True"
	I0816 18:15:18.890138   74510 node_ready.go:38] duration metric: took 7.004237799s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:18.890156   74510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:18.897124   74510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902860   74510 pod_ready.go:93] pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:18.902878   74510 pod_ready.go:82] duration metric: took 5.73242ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902886   74510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:20.909185   74510 pod_ready.go:103] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.909629   74510 pod_ready.go:93] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:21.909660   74510 pod_ready.go:82] duration metric: took 3.006768325s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:21.909670   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916066   74510 pod_ready.go:93] pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.916090   74510 pod_ready.go:82] duration metric: took 1.006414177s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916099   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920882   74510 pod_ready.go:93] pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.920908   74510 pod_ready.go:82] duration metric: took 4.802561ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920918   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926952   74510 pod_ready.go:93] pod "kube-proxy-j5rl7" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.926975   74510 pod_ready.go:82] duration metric: took 6.0498ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926984   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:19.206127   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.206280   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.705588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:20.448793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:22.948798   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.647030   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.147677   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.647324   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.147973   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.147160   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.646963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.147620   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.647918   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.933953   74510 pod_ready.go:103] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.433826   74510 pod_ready.go:93] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:25.433846   74510 pod_ready.go:82] duration metric: took 2.506855714s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:25.433855   74510 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:27.440119   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
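A minimal sketch of checking the same pod state by hand, assuming the kubeconfig context matches the profile name embed-certs-777541; the pod and namespace names are taken from the surrounding log, and the commands themselves are illustrative rather than part of the test run:

	kubectl --context embed-certs-777541 -n kube-system get pod metrics-server-6867b74b74-6hkzb
	kubectl --context embed-certs-777541 -n kube-system describe pod metrics-server-6867b74b74-6hkzb
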
	I0816 18:15:25.707915   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.206580   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.447687   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:27.948700   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.146994   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.647364   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.147332   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.647773   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.147276   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.647794   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.147398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.647565   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.147139   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.647961   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.440564   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:31.940747   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:30.706544   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.706852   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:29.948982   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.447920   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:34.448186   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:33.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.647087   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.147881   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.646988   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.147118   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.647978   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.147541   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.647423   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.147051   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.647726   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.439692   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.439956   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.440315   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:35.206291   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:37.206902   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.948416   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.447952   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.147192   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.647318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.147186   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.647662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.147044   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.647787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.147638   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.647490   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.147787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.647959   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.440405   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:42.440727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.207086   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.706048   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.706585   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.450069   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.948101   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.147938   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.647855   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.147781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.647710   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:44.647796   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:44.682176   75402 cri.go:89] found id: ""
	I0816 18:15:44.682207   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.682218   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:44.682226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:44.682285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:44.717500   75402 cri.go:89] found id: ""
	I0816 18:15:44.717530   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.717540   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:44.717552   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:44.717620   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:44.751816   75402 cri.go:89] found id: ""
	I0816 18:15:44.751847   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.751858   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:44.751865   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:44.751942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:44.783236   75402 cri.go:89] found id: ""
	I0816 18:15:44.783260   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.783267   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:44.783272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:44.783337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:44.813087   75402 cri.go:89] found id: ""
	I0816 18:15:44.813110   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.813116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:44.813122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:44.813166   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:44.843568   75402 cri.go:89] found id: ""
	I0816 18:15:44.843599   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.843609   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:44.843616   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:44.843679   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:44.873694   75402 cri.go:89] found id: ""
	I0816 18:15:44.873723   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.873734   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:44.873741   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:44.873808   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:44.906183   75402 cri.go:89] found id: ""
	I0816 18:15:44.906212   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.906222   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:44.906231   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:44.906241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:44.958963   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:44.958993   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:44.972390   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:44.972415   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:45.091624   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:45.091645   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:45.091661   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:45.159927   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:45.159963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
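The cycle above repeats below: pgrep for a running kube-apiserver, crictl listings for each control-plane component, then gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output while the apiserver stays down. A minimal sketch of running the same checks by hand on the node (for example after an SSH session into the affected profile); the commands are copied verbatim from the log:

	sudo pgrep -xnf kube-apiserver.*minikube.*
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
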
	I0816 18:15:47.698398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:47.711848   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:47.711917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:47.744247   75402 cri.go:89] found id: ""
	I0816 18:15:47.744278   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.744288   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:47.744295   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:47.744374   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:47.783188   75402 cri.go:89] found id: ""
	I0816 18:15:47.783211   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.783219   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:47.783224   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:47.783270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:47.829284   75402 cri.go:89] found id: ""
	I0816 18:15:47.829320   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.829333   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:47.829341   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:47.829413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:47.879482   75402 cri.go:89] found id: ""
	I0816 18:15:47.879514   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.879525   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:47.879532   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:47.879606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:47.913766   75402 cri.go:89] found id: ""
	I0816 18:15:47.913797   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.913808   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:47.913815   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:47.913880   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:47.947262   75402 cri.go:89] found id: ""
	I0816 18:15:47.947340   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.947353   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:47.947362   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:47.947427   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:47.979638   75402 cri.go:89] found id: ""
	I0816 18:15:47.979667   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.979678   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:47.979685   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:47.979741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:48.010246   75402 cri.go:89] found id: ""
	I0816 18:15:48.010277   75402 logs.go:276] 0 containers: []
	W0816 18:15:48.010288   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:48.010296   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:48.010310   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:48.083916   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:48.083953   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:44.940775   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.440356   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:46.207236   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.705791   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:45.948300   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.948501   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.120254   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:48.120285   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:48.169590   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:48.169628   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:48.182821   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:48.182850   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:48.254088   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:50.755114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:50.768167   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:50.768250   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:50.800881   75402 cri.go:89] found id: ""
	I0816 18:15:50.800906   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.800913   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:50.800918   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:50.800969   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:50.833538   75402 cri.go:89] found id: ""
	I0816 18:15:50.833567   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.833578   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:50.833586   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:50.833649   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:50.867306   75402 cri.go:89] found id: ""
	I0816 18:15:50.867336   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.867347   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:50.867353   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:50.867400   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:50.900029   75402 cri.go:89] found id: ""
	I0816 18:15:50.900055   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.900064   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:50.900072   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:50.900135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:50.933604   75402 cri.go:89] found id: ""
	I0816 18:15:50.933630   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.933638   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:50.933643   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:50.933707   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:50.966102   75402 cri.go:89] found id: ""
	I0816 18:15:50.966131   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.966141   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:50.966149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:50.966210   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:50.998007   75402 cri.go:89] found id: ""
	I0816 18:15:50.998036   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.998047   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:50.998054   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:50.998115   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:51.032306   75402 cri.go:89] found id: ""
	I0816 18:15:51.032342   75402 logs.go:276] 0 containers: []
	W0816 18:15:51.032349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:51.032357   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:51.032369   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:51.083186   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:51.083222   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:51.096072   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:51.096153   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:51.162667   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:51.162693   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:51.162709   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:51.241913   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:51.241954   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:49.440546   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:51.940026   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.706662   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.206075   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.447947   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:52.448340   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:54.448431   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.779323   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:53.793358   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:53.793433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:53.827380   75402 cri.go:89] found id: ""
	I0816 18:15:53.827414   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.827424   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:53.827430   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:53.827489   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:53.867331   75402 cri.go:89] found id: ""
	I0816 18:15:53.867370   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.867380   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:53.867386   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:53.867438   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:53.899445   75402 cri.go:89] found id: ""
	I0816 18:15:53.899477   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.899489   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:53.899498   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:53.899588   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:53.936527   75402 cri.go:89] found id: ""
	I0816 18:15:53.936556   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.936568   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:53.936576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:53.936653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:53.970739   75402 cri.go:89] found id: ""
	I0816 18:15:53.970765   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.970773   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:53.970780   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:53.970842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:54.004119   75402 cri.go:89] found id: ""
	I0816 18:15:54.004150   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.004159   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:54.004164   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:54.004217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:54.038370   75402 cri.go:89] found id: ""
	I0816 18:15:54.038400   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.038411   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:54.038416   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:54.038472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:54.079346   75402 cri.go:89] found id: ""
	I0816 18:15:54.079375   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.079383   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:54.079392   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:54.079403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:54.116551   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:54.116586   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:54.169930   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:54.169970   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:54.182416   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:54.182448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:54.253516   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:54.253539   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:54.253559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:56.833124   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:56.846139   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:56.846211   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:56.880899   75402 cri.go:89] found id: ""
	I0816 18:15:56.880928   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.880939   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:56.880945   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:56.880994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:56.913362   75402 cri.go:89] found id: ""
	I0816 18:15:56.913393   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.913406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:56.913415   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:56.913507   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:56.951876   75402 cri.go:89] found id: ""
	I0816 18:15:56.951904   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.951914   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:56.951919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:56.951988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:56.986335   75402 cri.go:89] found id: ""
	I0816 18:15:56.986358   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.986366   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:56.986372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:56.986423   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:57.022485   75402 cri.go:89] found id: ""
	I0816 18:15:57.022511   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.022522   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:57.022529   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:57.022641   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:57.055436   75402 cri.go:89] found id: ""
	I0816 18:15:57.055463   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.055470   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:57.055476   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:57.055536   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:57.085930   75402 cri.go:89] found id: ""
	I0816 18:15:57.085965   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.085975   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:57.085981   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:57.086032   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:57.120436   75402 cri.go:89] found id: ""
	I0816 18:15:57.120466   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.120477   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:57.120488   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:57.120501   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:57.202161   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:57.202218   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:57.243766   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:57.243805   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:57.295552   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:57.295585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:57.307769   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:57.307802   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:57.390480   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:53.941399   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.439763   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:58.440357   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:55.206970   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:57.207312   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.948085   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.448174   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.891480   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:59.904766   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:59.904836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:59.939209   75402 cri.go:89] found id: ""
	I0816 18:15:59.939241   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.939252   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:59.939260   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:59.939324   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:59.971782   75402 cri.go:89] found id: ""
	I0816 18:15:59.971812   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.971822   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:59.971832   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:59.971894   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:00.018585   75402 cri.go:89] found id: ""
	I0816 18:16:00.018630   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.018643   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:00.018654   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:00.018722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.050484   75402 cri.go:89] found id: ""
	I0816 18:16:00.050520   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.050532   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:00.050540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:00.050603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:00.082900   75402 cri.go:89] found id: ""
	I0816 18:16:00.082930   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.082942   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:00.082951   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:00.083025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:00.115330   75402 cri.go:89] found id: ""
	I0816 18:16:00.115363   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.115372   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:00.115378   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:00.115442   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:00.150898   75402 cri.go:89] found id: ""
	I0816 18:16:00.150935   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.150952   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:00.150960   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:00.151033   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:00.193304   75402 cri.go:89] found id: ""
	I0816 18:16:00.193338   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.193349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:00.193359   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:00.193370   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:00.247340   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:00.247376   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:00.260470   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:00.260500   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:00.336483   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:00.336506   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:00.336521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:00.421251   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:00.421289   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:02.964042   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:02.977284   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:02.977381   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:03.009533   75402 cri.go:89] found id: ""
	I0816 18:16:03.009574   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.009586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:03.009594   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:03.009673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:03.043756   75402 cri.go:89] found id: ""
	I0816 18:16:03.043784   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.043794   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:03.043802   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:03.043867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:03.078817   75402 cri.go:89] found id: ""
	I0816 18:16:03.078840   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.078848   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:03.078853   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:03.078906   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.440728   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:02.440788   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.706129   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.707967   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.948193   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.448504   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:03.112874   75402 cri.go:89] found id: ""
	I0816 18:16:03.112903   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.112912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:03.112918   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:03.112985   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:03.152008   75402 cri.go:89] found id: ""
	I0816 18:16:03.152040   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.152052   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:03.152059   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:03.152125   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:03.187353   75402 cri.go:89] found id: ""
	I0816 18:16:03.187386   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.187396   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:03.187404   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:03.187467   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:03.220860   75402 cri.go:89] found id: ""
	I0816 18:16:03.220895   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.220903   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:03.220909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:03.220958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:03.252202   75402 cri.go:89] found id: ""
	I0816 18:16:03.252240   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.252247   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:03.252256   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:03.252268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:03.286907   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:03.286934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:03.338212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:03.338249   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:03.352548   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:03.352585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:03.427580   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:03.427610   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:03.427626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.011792   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:06.024201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:06.024277   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:06.058328   75402 cri.go:89] found id: ""
	I0816 18:16:06.058356   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.058367   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:06.058373   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:06.058433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:06.091262   75402 cri.go:89] found id: ""
	I0816 18:16:06.091298   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.091311   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:06.091318   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:06.091382   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:06.124114   75402 cri.go:89] found id: ""
	I0816 18:16:06.124146   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.124154   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:06.124159   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:06.124220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:06.155379   75402 cri.go:89] found id: ""
	I0816 18:16:06.155406   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.155416   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:06.155422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:06.155471   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:06.189442   75402 cri.go:89] found id: ""
	I0816 18:16:06.189472   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.189480   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:06.189485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:06.189538   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:06.228881   75402 cri.go:89] found id: ""
	I0816 18:16:06.228910   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.228921   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:06.228929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:06.229003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:06.262272   75402 cri.go:89] found id: ""
	I0816 18:16:06.262299   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.262310   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:06.262317   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:06.262386   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:06.295427   75402 cri.go:89] found id: ""
	I0816 18:16:06.295456   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.295468   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:06.295478   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:06.295492   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:06.347569   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:06.347608   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:06.362786   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:06.362825   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:06.432020   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:06.432044   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:06.432059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.512085   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:06.512120   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:04.940128   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:07.439708   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.206477   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.208125   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.706765   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.947599   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.948183   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:09.051957   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:09.066630   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:09.066690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:09.101484   75402 cri.go:89] found id: ""
	I0816 18:16:09.101515   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.101526   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:09.101536   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:09.101614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:09.140645   75402 cri.go:89] found id: ""
	I0816 18:16:09.140677   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.140689   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:09.140696   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:09.140758   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:09.174666   75402 cri.go:89] found id: ""
	I0816 18:16:09.174698   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.174708   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:09.174717   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:09.174780   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:09.209715   75402 cri.go:89] found id: ""
	I0816 18:16:09.209748   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.209758   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:09.209767   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:09.209845   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:09.243681   75402 cri.go:89] found id: ""
	I0816 18:16:09.243712   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.243720   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:09.243726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:09.243781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:09.278058   75402 cri.go:89] found id: ""
	I0816 18:16:09.278090   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.278102   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:09.278111   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:09.278178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:09.313092   75402 cri.go:89] found id: ""
	I0816 18:16:09.313122   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.313132   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:09.313137   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:09.313201   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:09.345203   75402 cri.go:89] found id: ""
	I0816 18:16:09.345229   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.345236   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:09.345245   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:09.345259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:09.358198   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:09.358225   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:09.422024   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:09.422047   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:09.422059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:09.498684   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:09.498717   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.535349   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:09.535382   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.087472   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:12.100412   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:12.100477   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:12.133982   75402 cri.go:89] found id: ""
	I0816 18:16:12.134018   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.134030   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:12.134038   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:12.134100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:12.166466   75402 cri.go:89] found id: ""
	I0816 18:16:12.166497   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.166507   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:12.166514   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:12.166589   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:12.197752   75402 cri.go:89] found id: ""
	I0816 18:16:12.197779   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.197790   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:12.197797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:12.197856   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:12.239759   75402 cri.go:89] found id: ""
	I0816 18:16:12.239789   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.239801   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:12.239810   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:12.239871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:12.273263   75402 cri.go:89] found id: ""
	I0816 18:16:12.273292   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.273302   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:12.273310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:12.273370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:12.308788   75402 cri.go:89] found id: ""
	I0816 18:16:12.308820   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.308831   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:12.308839   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:12.308897   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:12.345243   75402 cri.go:89] found id: ""
	I0816 18:16:12.345274   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.345281   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:12.345288   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:12.345341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:12.379939   75402 cri.go:89] found id: ""
	I0816 18:16:12.379968   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.379978   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:12.379989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:12.380004   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.436097   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:12.436130   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:12.449328   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:12.449357   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:12.518723   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:12.518749   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:12.518764   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:12.600228   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:12.600268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.441051   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.441097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.206853   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.705328   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.449793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.948517   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.137940   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:15.150617   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:15.150694   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:15.186029   75402 cri.go:89] found id: ""
	I0816 18:16:15.186057   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.186067   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:15.186074   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:15.186134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:15.219812   75402 cri.go:89] found id: ""
	I0816 18:16:15.219840   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.219851   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:15.219864   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:15.219927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:15.253434   75402 cri.go:89] found id: ""
	I0816 18:16:15.253462   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.253472   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:15.253479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:15.253542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:15.286697   75402 cri.go:89] found id: ""
	I0816 18:16:15.286729   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.286745   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:15.286751   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:15.286810   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:15.319363   75402 cri.go:89] found id: ""
	I0816 18:16:15.319405   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.319415   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:15.319422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:15.319506   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:15.353900   75402 cri.go:89] found id: ""
	I0816 18:16:15.353924   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.353931   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:15.353937   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:15.353991   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:15.389086   75402 cri.go:89] found id: ""
	I0816 18:16:15.389114   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.389122   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:15.389127   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:15.389184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:15.424069   75402 cri.go:89] found id: ""
	I0816 18:16:15.424099   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.424110   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:15.424121   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:15.424136   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:15.482703   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:15.482738   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:15.496859   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:15.496886   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:15.562178   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:15.562196   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:15.562212   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:15.643484   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:15.643521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:13.944174   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.439987   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.442569   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.706743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.206088   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.448775   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.948447   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.180963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:18.194705   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:18.194783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:18.231302   75402 cri.go:89] found id: ""
	I0816 18:16:18.231337   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.231348   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:18.231355   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:18.231413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:18.264098   75402 cri.go:89] found id: ""
	I0816 18:16:18.264124   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.264135   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:18.264155   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:18.264228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:18.298133   75402 cri.go:89] found id: ""
	I0816 18:16:18.298165   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.298178   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:18.298186   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:18.298252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:18.331323   75402 cri.go:89] found id: ""
	I0816 18:16:18.331354   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.331362   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:18.331367   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:18.331416   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:18.365677   75402 cri.go:89] found id: ""
	I0816 18:16:18.365709   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.365718   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:18.365724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:18.365774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:18.399801   75402 cri.go:89] found id: ""
	I0816 18:16:18.399835   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.399844   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:18.399850   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:18.399908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:18.438148   75402 cri.go:89] found id: ""
	I0816 18:16:18.438179   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.438189   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:18.438197   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:18.438257   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:18.472185   75402 cri.go:89] found id: ""
	I0816 18:16:18.472215   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.472223   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:18.472232   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:18.472243   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:18.523369   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:18.523400   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:18.536152   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:18.536179   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:18.611539   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:18.611560   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:18.611571   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:18.688043   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:18.688079   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:21.229163   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:21.242641   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:21.242717   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:21.275188   75402 cri.go:89] found id: ""
	I0816 18:16:21.275213   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.275220   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:21.275226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:21.275275   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:21.308377   75402 cri.go:89] found id: ""
	I0816 18:16:21.308406   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.308417   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:21.308424   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:21.308475   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:21.341067   75402 cri.go:89] found id: ""
	I0816 18:16:21.341098   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.341106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:21.341112   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:21.341170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:21.372707   75402 cri.go:89] found id: ""
	I0816 18:16:21.372743   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.372756   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:21.372763   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:21.372847   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:21.410210   75402 cri.go:89] found id: ""
	I0816 18:16:21.410241   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.410252   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:21.410259   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:21.410323   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:21.444840   75402 cri.go:89] found id: ""
	I0816 18:16:21.444863   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.444872   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:21.444879   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:21.444942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:21.478278   75402 cri.go:89] found id: ""
	I0816 18:16:21.478319   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.478327   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:21.478333   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:21.478395   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:21.512026   75402 cri.go:89] found id: ""
	I0816 18:16:21.512063   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.512073   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:21.512090   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:21.512111   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:21.564800   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:21.564834   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:21.577343   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:21.577368   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:21.663216   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:21.663238   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:21.663251   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:21.741960   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:21.741994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:20.939740   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.942844   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:20.706032   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.707112   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:21.449404   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:23.454804   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:24.282136   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:24.296452   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:24.296513   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:24.337173   75402 cri.go:89] found id: ""
	I0816 18:16:24.337200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.337210   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:24.337218   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:24.337282   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:24.374163   75402 cri.go:89] found id: ""
	I0816 18:16:24.374200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.374213   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:24.374222   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:24.374287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:24.407823   75402 cri.go:89] found id: ""
	I0816 18:16:24.407854   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.407866   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:24.407881   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:24.407953   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:24.444006   75402 cri.go:89] found id: ""
	I0816 18:16:24.444032   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.444042   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:24.444049   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:24.444113   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:24.479082   75402 cri.go:89] found id: ""
	I0816 18:16:24.479110   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.479119   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:24.479125   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:24.479174   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:24.524738   75402 cri.go:89] found id: ""
	I0816 18:16:24.524764   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.524775   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:24.524782   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:24.524842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:24.560298   75402 cri.go:89] found id: ""
	I0816 18:16:24.560326   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.560335   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:24.560343   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:24.560406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:24.597182   75402 cri.go:89] found id: ""
	I0816 18:16:24.597214   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.597227   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:24.597239   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:24.597254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:24.653063   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:24.653106   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:24.665940   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:24.665972   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:24.736599   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:24.736639   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:24.736657   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:24.821883   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:24.821939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.359558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:27.382980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:27.383053   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:27.416766   75402 cri.go:89] found id: ""
	I0816 18:16:27.416793   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.416802   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:27.416811   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:27.416873   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:27.452966   75402 cri.go:89] found id: ""
	I0816 18:16:27.452988   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.452995   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:27.453001   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:27.453050   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:27.485850   75402 cri.go:89] found id: ""
	I0816 18:16:27.485885   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.485896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:27.485903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:27.485960   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:27.517667   75402 cri.go:89] found id: ""
	I0816 18:16:27.517694   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.517704   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:27.517711   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:27.517774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:27.553547   75402 cri.go:89] found id: ""
	I0816 18:16:27.553574   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.553582   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:27.553593   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:27.553653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:27.586857   75402 cri.go:89] found id: ""
	I0816 18:16:27.586884   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.586893   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:27.586898   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:27.586957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:27.621739   75402 cri.go:89] found id: ""
	I0816 18:16:27.621766   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.621776   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:27.621784   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:27.621844   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:27.657772   75402 cri.go:89] found id: ""
	I0816 18:16:27.657797   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.657805   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:27.657819   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:27.657831   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:27.729769   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:27.729796   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:27.729810   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:27.813351   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:27.813403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.852985   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:27.853010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:27.908434   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:27.908476   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:25.439828   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.440749   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.207590   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.706496   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.948579   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:28.448590   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.422781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:30.435987   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:30.436070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:30.470878   75402 cri.go:89] found id: ""
	I0816 18:16:30.470907   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.470918   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:30.470926   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:30.470983   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:30.504940   75402 cri.go:89] found id: ""
	I0816 18:16:30.504969   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.504980   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:30.504988   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:30.505058   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:30.538680   75402 cri.go:89] found id: ""
	I0816 18:16:30.538708   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.538716   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:30.538722   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:30.538788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:30.574757   75402 cri.go:89] found id: ""
	I0816 18:16:30.574782   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.574791   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:30.574797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:30.574853   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:30.612500   75402 cri.go:89] found id: ""
	I0816 18:16:30.612529   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.612539   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:30.612547   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:30.612613   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:30.644572   75402 cri.go:89] found id: ""
	I0816 18:16:30.644595   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.644603   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:30.644609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:30.644678   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:30.678199   75402 cri.go:89] found id: ""
	I0816 18:16:30.678232   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.678243   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:30.678252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:30.678331   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:30.709435   75402 cri.go:89] found id: ""
	I0816 18:16:30.709470   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.709482   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:30.709494   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:30.709511   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.723430   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:30.723464   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:30.800340   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:30.800374   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:30.800390   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:30.883945   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:30.883986   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:30.922107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:30.922139   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:29.940430   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.440198   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:29.706649   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.205271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.949515   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.448456   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.480016   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:33.494178   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:33.494241   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:33.529497   75402 cri.go:89] found id: ""
	I0816 18:16:33.529527   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.529546   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:33.529554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:33.529614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:33.566670   75402 cri.go:89] found id: ""
	I0816 18:16:33.566700   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.566711   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:33.566718   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:33.566781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:33.603898   75402 cri.go:89] found id: ""
	I0816 18:16:33.603926   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.603937   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:33.603944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:33.604003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:33.636077   75402 cri.go:89] found id: ""
	I0816 18:16:33.636111   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.636125   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:33.636134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:33.636200   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:33.668974   75402 cri.go:89] found id: ""
	I0816 18:16:33.669002   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.669011   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:33.669017   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:33.669070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:33.700981   75402 cri.go:89] found id: ""
	I0816 18:16:33.701010   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.701019   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:33.701026   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:33.701088   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:33.735430   75402 cri.go:89] found id: ""
	I0816 18:16:33.735463   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.735474   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:33.735481   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:33.735539   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:33.779797   75402 cri.go:89] found id: ""
	I0816 18:16:33.779829   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.779840   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:33.779851   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:33.779865   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:33.824873   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:33.824908   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.874177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:33.874217   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:33.888535   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:33.888561   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:33.957590   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:33.957608   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:33.957627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:36.533660   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:36.546542   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:36.546606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:36.584056   75402 cri.go:89] found id: ""
	I0816 18:16:36.584085   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.584094   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:36.584099   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:36.584149   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:36.622143   75402 cri.go:89] found id: ""
	I0816 18:16:36.622172   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.622184   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:36.622193   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:36.622262   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:36.655479   75402 cri.go:89] found id: ""
	I0816 18:16:36.655509   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.655520   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:36.655528   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:36.655603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:36.688044   75402 cri.go:89] found id: ""
	I0816 18:16:36.688076   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.688088   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:36.688096   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:36.688161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:36.725831   75402 cri.go:89] found id: ""
	I0816 18:16:36.725861   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.725868   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:36.725874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:36.725925   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:36.758398   75402 cri.go:89] found id: ""
	I0816 18:16:36.758433   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.758444   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:36.758453   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:36.758517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:36.791097   75402 cri.go:89] found id: ""
	I0816 18:16:36.791126   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.791136   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:36.791144   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:36.791207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:36.829337   75402 cri.go:89] found id: ""
	I0816 18:16:36.829369   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.829380   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:36.829391   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:36.829405   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:36.881898   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:36.881932   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:36.895584   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:36.895618   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:36.967175   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:36.967197   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:36.967213   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:37.046993   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:37.047025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:34.440475   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.946369   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:34.206677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.207893   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:38.706193   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:35.449611   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:37.947527   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
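	Interleaved with the 75402 runner, three other test processes (74510, 74828, 75006) keep polling metrics-server pods that never report Ready. A hypothetical manual equivalent of that readiness check, with the pod name copied from the log and the jsonpath filter added only for illustration:

	    kubectl -n kube-system get pod metrics-server-6867b74b74-fc4h4 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
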
	I0816 18:16:39.588683   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:39.607205   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:39.607287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:39.640517   75402 cri.go:89] found id: ""
	I0816 18:16:39.640541   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.640549   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:39.640554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:39.640604   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:39.673777   75402 cri.go:89] found id: ""
	I0816 18:16:39.673805   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.673813   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:39.673818   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:39.673899   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:39.709574   75402 cri.go:89] found id: ""
	I0816 18:16:39.709598   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.709606   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:39.709611   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:39.709666   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:39.743946   75402 cri.go:89] found id: ""
	I0816 18:16:39.743971   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.743979   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:39.743985   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:39.744041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:39.776140   75402 cri.go:89] found id: ""
	I0816 18:16:39.776171   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.776181   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:39.776187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:39.776254   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:39.808697   75402 cri.go:89] found id: ""
	I0816 18:16:39.808719   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.808728   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:39.808734   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:39.808793   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:39.840163   75402 cri.go:89] found id: ""
	I0816 18:16:39.840190   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.840200   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:39.840206   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:39.840270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:39.874396   75402 cri.go:89] found id: ""
	I0816 18:16:39.874419   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.874426   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:39.874434   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:39.874448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:39.927922   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:39.927963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:39.942048   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:39.942076   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:40.012143   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:40.012166   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:40.012181   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:40.088798   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:40.088844   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
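	The timestamps show the 75402 runner repeating the pgrep/crictl sweep roughly every three seconds. A generic poll-until-timeout loop in Go that illustrates the pattern only (a sketch, not minikube's implementation; the six-minute deadline is an assumed value):

	    // Hypothetical poll-until-timeout loop, illustrative only.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    func main() {
	    	deadline := time.Now().Add(6 * time.Minute)
	    	for time.Now().Before(deadline) {
	    		// Succeeds only once a kube-apiserver process exists.
	    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
	    			fmt.Println("apiserver process found")
	    			return
	    		}
	    		time.Sleep(3 * time.Second)
	    	}
	    	fmt.Println("timed out waiting for kube-apiserver")
	    }
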
	I0816 18:16:42.625875   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:42.640386   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:42.640448   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:42.675201   75402 cri.go:89] found id: ""
	I0816 18:16:42.675224   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.675231   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:42.675236   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:42.675293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:42.705156   75402 cri.go:89] found id: ""
	I0816 18:16:42.705182   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.705192   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:42.705199   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:42.705258   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:42.738921   75402 cri.go:89] found id: ""
	I0816 18:16:42.738948   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.738956   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:42.738962   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:42.739013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:42.771130   75402 cri.go:89] found id: ""
	I0816 18:16:42.771160   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.771168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:42.771175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:42.771231   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:42.805774   75402 cri.go:89] found id: ""
	I0816 18:16:42.805803   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.805811   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:42.805817   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:42.805879   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:42.840248   75402 cri.go:89] found id: ""
	I0816 18:16:42.840277   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.840293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:42.840302   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:42.840360   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:42.873260   75402 cri.go:89] found id: ""
	I0816 18:16:42.873287   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.873297   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:42.873322   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:42.873383   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:42.906205   75402 cri.go:89] found id: ""
	I0816 18:16:42.906230   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.906238   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:42.906247   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:42.906257   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:42.959235   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:42.959272   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:42.972063   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:42.972090   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:43.039530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:43.039558   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:43.039569   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:39.440219   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:41.441052   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:40.707059   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.210643   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:42.448534   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.115486   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:43.115519   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:45.651040   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:45.663718   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:45.663812   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:45.696548   75402 cri.go:89] found id: ""
	I0816 18:16:45.696578   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.696586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:45.696591   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:45.696663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:45.731032   75402 cri.go:89] found id: ""
	I0816 18:16:45.731059   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.731068   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:45.731073   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:45.731126   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:45.764801   75402 cri.go:89] found id: ""
	I0816 18:16:45.764829   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.764840   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:45.764846   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:45.764908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:45.800768   75402 cri.go:89] found id: ""
	I0816 18:16:45.800795   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.800803   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:45.800809   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:45.800858   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:45.841460   75402 cri.go:89] found id: ""
	I0816 18:16:45.841486   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.841493   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:45.841505   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:45.841566   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:45.875230   75402 cri.go:89] found id: ""
	I0816 18:16:45.875254   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.875261   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:45.875266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:45.875319   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:45.907711   75402 cri.go:89] found id: ""
	I0816 18:16:45.907739   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.907747   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:45.907753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:45.907804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:45.943147   75402 cri.go:89] found id: ""
	I0816 18:16:45.943171   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.943182   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:45.943192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:45.943206   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:45.998459   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:45.998491   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:46.013237   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:46.013267   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:46.079248   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:46.079273   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:46.079288   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:46.158842   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:46.158874   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:43.939212   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.939893   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:47.940331   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.706588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.206342   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:44.948046   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:46.948752   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:49.448263   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.696728   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:48.710946   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:48.711041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:48.746696   75402 cri.go:89] found id: ""
	I0816 18:16:48.746727   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.746735   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:48.746741   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:48.746803   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:48.781496   75402 cri.go:89] found id: ""
	I0816 18:16:48.781522   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.781532   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:48.781539   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:48.781603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:48.815628   75402 cri.go:89] found id: ""
	I0816 18:16:48.815654   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.815665   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:48.815673   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:48.815736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:48.848990   75402 cri.go:89] found id: ""
	I0816 18:16:48.849018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.849030   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:48.849040   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:48.849098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:48.886924   75402 cri.go:89] found id: ""
	I0816 18:16:48.886949   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.886960   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:48.886968   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:48.887022   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:48.923989   75402 cri.go:89] found id: ""
	I0816 18:16:48.924018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.924030   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:48.924038   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:48.924102   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:48.959513   75402 cri.go:89] found id: ""
	I0816 18:16:48.959546   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.959556   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:48.959562   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:48.959614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:48.995615   75402 cri.go:89] found id: ""
	I0816 18:16:48.995651   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.995662   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:48.995673   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:48.995688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:49.008440   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:49.008468   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:49.076761   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:49.076780   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:49.076797   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:49.152855   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:49.152893   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:49.190857   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:49.190887   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
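	When every component list comes back empty, the runner falls back to gathering kubelet, dmesg, CRI-O, and container-status output. The same commands can be run by hand inside the VM (copied verbatim from the log lines above):

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
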
	I0816 18:16:51.745344   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:51.759552   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:51.759628   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:51.795494   75402 cri.go:89] found id: ""
	I0816 18:16:51.795520   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.795531   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:51.795539   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:51.795600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:51.833162   75402 cri.go:89] found id: ""
	I0816 18:16:51.833188   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.833198   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:51.833205   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:51.833265   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:51.866940   75402 cri.go:89] found id: ""
	I0816 18:16:51.866968   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.866979   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:51.866986   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:51.867051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:51.899824   75402 cri.go:89] found id: ""
	I0816 18:16:51.899857   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.899867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:51.899874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:51.899937   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:51.932273   75402 cri.go:89] found id: ""
	I0816 18:16:51.932297   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.932312   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:51.932320   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:51.932390   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:51.966885   75402 cri.go:89] found id: ""
	I0816 18:16:51.966911   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.966922   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:51.966930   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:51.966996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:52.002988   75402 cri.go:89] found id: ""
	I0816 18:16:52.003020   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.003029   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:52.003035   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:52.003098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:52.038858   75402 cri.go:89] found id: ""
	I0816 18:16:52.038894   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.038909   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:52.038919   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:52.038933   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:52.076404   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:52.076431   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:52.127735   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:52.127767   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:52.140657   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:52.140680   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:52.202961   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:52.202989   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:52.203008   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:50.440577   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.441865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:50.705618   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.706795   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:51.448948   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:53.947907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:54.787095   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:54.801258   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:54.801332   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:54.837987   75402 cri.go:89] found id: ""
	I0816 18:16:54.838018   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.838028   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:54.838034   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:54.838118   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:54.872439   75402 cri.go:89] found id: ""
	I0816 18:16:54.872466   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.872477   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:54.872490   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:54.872554   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:54.904676   75402 cri.go:89] found id: ""
	I0816 18:16:54.904706   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.904717   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:54.904724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:54.904783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:54.938101   75402 cri.go:89] found id: ""
	I0816 18:16:54.938134   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.938145   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:54.938154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:54.938218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:54.977409   75402 cri.go:89] found id: ""
	I0816 18:16:54.977442   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.977453   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:54.977460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:54.977521   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:55.013248   75402 cri.go:89] found id: ""
	I0816 18:16:55.013275   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.013286   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:55.013294   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:55.013363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:55.044555   75402 cri.go:89] found id: ""
	I0816 18:16:55.044588   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.044597   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:55.044603   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:55.044690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:55.075970   75402 cri.go:89] found id: ""
	I0816 18:16:55.075997   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.076006   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:55.076014   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:55.076025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:55.149982   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:55.150017   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:55.190160   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:55.190194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:55.242629   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:55.242660   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:55.255229   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:55.255254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:55.324775   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
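	Every "describe nodes" attempt fails the same way: the apiserver endpoint on localhost:8443 refuses connections because no kube-apiserver container ever came up. One hedged way to confirm that directly reuses the kubectl binary and kubeconfig paths from the log (the /healthz probe is a suggestion, not part of the test run):

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
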
	I0816 18:16:57.824996   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:57.838666   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:57.838740   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:57.872828   75402 cri.go:89] found id: ""
	I0816 18:16:57.872861   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.872869   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:57.872875   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:57.872927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:57.907324   75402 cri.go:89] found id: ""
	I0816 18:16:57.907354   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.907366   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:57.907373   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:57.907433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:57.941657   75402 cri.go:89] found id: ""
	I0816 18:16:57.941682   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.941689   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:57.941695   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:57.941746   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:57.981424   75402 cri.go:89] found id: ""
	I0816 18:16:57.981466   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.981480   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:57.981489   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:57.981562   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:58.015534   75402 cri.go:89] found id: ""
	I0816 18:16:58.015587   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.015598   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:58.015606   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:58.015669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:58.047875   75402 cri.go:89] found id: ""
	I0816 18:16:58.047908   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.047917   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:58.047923   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:58.047976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:58.079294   75402 cri.go:89] found id: ""
	I0816 18:16:58.079324   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.079334   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:58.079342   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:58.079406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:54.940977   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.439254   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.208298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.706380   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.948080   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.949589   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:58.112357   75402 cri.go:89] found id: ""
	I0816 18:16:58.112389   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.112401   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:58.112413   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:58.112428   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:58.159903   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:58.159934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:58.172763   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:58.172789   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:58.245827   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:58.245856   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:58.245872   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:58.325008   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:58.325049   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:00.864354   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:00.877517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:00.877593   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:00.915396   75402 cri.go:89] found id: ""
	I0816 18:17:00.915428   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.915438   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:00.915446   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:00.915611   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:00.953950   75402 cri.go:89] found id: ""
	I0816 18:17:00.953977   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.953987   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:00.953993   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:00.954051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:00.987673   75402 cri.go:89] found id: ""
	I0816 18:17:00.987703   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.987713   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:00.987721   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:00.987784   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:01.021230   75402 cri.go:89] found id: ""
	I0816 18:17:01.021277   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.021308   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:01.021315   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:01.021388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:01.057087   75402 cri.go:89] found id: ""
	I0816 18:17:01.057117   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.057127   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:01.057135   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:01.057207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:01.094142   75402 cri.go:89] found id: ""
	I0816 18:17:01.094168   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.094176   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:01.094183   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:01.094233   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:01.132799   75402 cri.go:89] found id: ""
	I0816 18:17:01.132824   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.132831   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:01.132837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:01.132888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:01.173367   75402 cri.go:89] found id: ""
	I0816 18:17:01.173402   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.173414   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:01.173425   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:01.173443   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:01.186856   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:01.186896   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:01.259913   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:01.259941   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:01.259955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:01.340914   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:01.340947   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:01.381023   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:01.381058   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:59.440314   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.440377   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:59.706750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.707186   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:00.448182   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:02.448773   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:03.933420   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:03.946940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:03.947008   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:03.984529   75402 cri.go:89] found id: ""
	I0816 18:17:03.984560   75402 logs.go:276] 0 containers: []
	W0816 18:17:03.984571   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:03.984581   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:03.984668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:04.017900   75402 cri.go:89] found id: ""
	I0816 18:17:04.017929   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.017940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:04.017948   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:04.018009   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:04.050837   75402 cri.go:89] found id: ""
	I0816 18:17:04.050871   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.050888   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:04.050896   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:04.050959   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:04.085448   75402 cri.go:89] found id: ""
	I0816 18:17:04.085477   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.085487   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:04.085495   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:04.085564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:04.118177   75402 cri.go:89] found id: ""
	I0816 18:17:04.118203   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.118213   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:04.118220   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:04.118284   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:04.150289   75402 cri.go:89] found id: ""
	I0816 18:17:04.150317   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.150330   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:04.150338   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:04.150404   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:04.184258   75402 cri.go:89] found id: ""
	I0816 18:17:04.184282   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.184290   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:04.184295   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:04.184347   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:04.217142   75402 cri.go:89] found id: ""
	I0816 18:17:04.217174   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.217184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:04.217192   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:04.217204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:04.253000   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:04.253034   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:04.304978   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:04.305018   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:04.320210   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:04.320241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:04.396146   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:04.396169   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:04.396184   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:06.980747   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:06.992944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:06.993006   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:07.026303   75402 cri.go:89] found id: ""
	I0816 18:17:07.026356   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.026368   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:07.026376   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:07.026443   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:07.059226   75402 cri.go:89] found id: ""
	I0816 18:17:07.059257   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.059268   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:07.059277   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:07.059339   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:07.092142   75402 cri.go:89] found id: ""
	I0816 18:17:07.092171   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.092182   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:07.092188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:07.092248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:07.125284   75402 cri.go:89] found id: ""
	I0816 18:17:07.125330   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.125347   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:07.125355   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:07.125420   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:07.163890   75402 cri.go:89] found id: ""
	I0816 18:17:07.163919   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.163930   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:07.163938   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:07.164002   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:07.197988   75402 cri.go:89] found id: ""
	I0816 18:17:07.198014   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.198025   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:07.198033   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:07.198116   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:07.232709   75402 cri.go:89] found id: ""
	I0816 18:17:07.232738   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.232749   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:07.232756   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:07.232817   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:07.264514   75402 cri.go:89] found id: ""
	I0816 18:17:07.264548   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.264558   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:07.264569   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:07.264583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:07.316138   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:07.316173   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:07.329659   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:07.329688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:07.397345   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:07.397380   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:07.397397   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:07.481245   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:07.481280   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:03.940100   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:05.940355   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.940821   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.207253   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:06.705745   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:08.706828   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.949027   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.447957   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.024405   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:10.036860   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:10.036927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.069402   75402 cri.go:89] found id: ""
	I0816 18:17:10.069436   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.069448   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:10.069458   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:10.069511   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:10.101480   75402 cri.go:89] found id: ""
	I0816 18:17:10.101508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.101518   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:10.101529   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:10.101601   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:10.131673   75402 cri.go:89] found id: ""
	I0816 18:17:10.131708   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.131719   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:10.131726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:10.131821   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:10.166476   75402 cri.go:89] found id: ""
	I0816 18:17:10.166508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.166518   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:10.166525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:10.166590   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:10.199296   75402 cri.go:89] found id: ""
	I0816 18:17:10.199321   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.199332   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:10.199340   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:10.199406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:10.232640   75402 cri.go:89] found id: ""
	I0816 18:17:10.232672   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.232683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:10.232691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:10.232775   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:10.263958   75402 cri.go:89] found id: ""
	I0816 18:17:10.263988   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.263998   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:10.264003   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:10.264052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:10.295904   75402 cri.go:89] found id: ""
	I0816 18:17:10.295929   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.295937   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:10.295946   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:10.295957   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:10.344874   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:10.344909   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:10.358523   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:10.358552   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:10.433311   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:10.433334   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:10.433351   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:10.514580   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:10.514620   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.053815   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:13.068517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:13.068597   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.440472   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:12.939209   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.707438   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.207630   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:09.947889   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:11.949408   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:14.447906   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.104251   75402 cri.go:89] found id: ""
	I0816 18:17:13.104279   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.104313   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:13.104321   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:13.104375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:13.137415   75402 cri.go:89] found id: ""
	I0816 18:17:13.137442   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.137453   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:13.137461   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:13.137510   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:13.174165   75402 cri.go:89] found id: ""
	I0816 18:17:13.174191   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.174203   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:13.174210   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:13.174271   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:13.206789   75402 cri.go:89] found id: ""
	I0816 18:17:13.206814   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.206823   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:13.206831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:13.206892   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:13.238950   75402 cri.go:89] found id: ""
	I0816 18:17:13.238975   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.238984   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:13.238990   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:13.239037   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:13.271485   75402 cri.go:89] found id: ""
	I0816 18:17:13.271518   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.271535   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:13.271544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:13.271612   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:13.307576   75402 cri.go:89] found id: ""
	I0816 18:17:13.307610   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.307622   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:13.307632   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:13.307698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:13.339746   75402 cri.go:89] found id: ""
	I0816 18:17:13.339792   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.339802   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:13.339813   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:13.339827   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:13.352847   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:13.352875   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:13.440397   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:13.440418   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:13.440432   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:13.514879   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:13.514916   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.553848   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:13.553882   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.103318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:16.115837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:16.115922   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:16.147079   75402 cri.go:89] found id: ""
	I0816 18:17:16.147108   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.147119   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:16.147127   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:16.147189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:16.184207   75402 cri.go:89] found id: ""
	I0816 18:17:16.184233   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.184241   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:16.184247   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:16.184295   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:16.219036   75402 cri.go:89] found id: ""
	I0816 18:17:16.219065   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.219072   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:16.219078   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:16.219163   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:16.251269   75402 cri.go:89] found id: ""
	I0816 18:17:16.251307   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.251320   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:16.251329   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:16.251394   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:16.286549   75402 cri.go:89] found id: ""
	I0816 18:17:16.286576   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.286585   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:16.286591   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:16.286647   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:16.322017   75402 cri.go:89] found id: ""
	I0816 18:17:16.322045   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.322055   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:16.322063   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:16.322128   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:16.353606   75402 cri.go:89] found id: ""
	I0816 18:17:16.353636   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.353646   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:16.353653   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:16.353719   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:16.386973   75402 cri.go:89] found id: ""
	I0816 18:17:16.387005   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.387016   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:16.387027   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:16.387039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.437031   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:16.437066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:16.451258   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:16.451292   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:16.519130   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:16.519155   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:16.519170   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:16.598591   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:16.598626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:14.939993   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.440655   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:15.705969   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.706271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:16.449266   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:18.948220   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:19.147916   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:19.160525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:19.160600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:19.193494   75402 cri.go:89] found id: ""
	I0816 18:17:19.193520   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.193527   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:19.193533   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:19.193599   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:19.230936   75402 cri.go:89] found id: ""
	I0816 18:17:19.230963   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.230971   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:19.230976   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:19.231029   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:19.263713   75402 cri.go:89] found id: ""
	I0816 18:17:19.263735   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.263742   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:19.263748   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:19.263794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:19.294609   75402 cri.go:89] found id: ""
	I0816 18:17:19.294635   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.294642   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:19.294647   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:19.294698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:19.329278   75402 cri.go:89] found id: ""
	I0816 18:17:19.329303   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.329313   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:19.329319   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:19.329368   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:19.362007   75402 cri.go:89] found id: ""
	I0816 18:17:19.362043   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.362052   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:19.362067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:19.362120   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:19.395190   75402 cri.go:89] found id: ""
	I0816 18:17:19.395217   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.395248   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:19.395255   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:19.395302   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:19.426962   75402 cri.go:89] found id: ""
	I0816 18:17:19.426991   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.427002   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:19.427012   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:19.427027   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.441319   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:19.441346   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:19.511390   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:19.511409   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:19.511425   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:19.590897   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:19.590935   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:19.628753   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:19.628781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.182534   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:22.194844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:22.194917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:22.228225   75402 cri.go:89] found id: ""
	I0816 18:17:22.228247   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.228269   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:22.228276   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:22.228325   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:22.258614   75402 cri.go:89] found id: ""
	I0816 18:17:22.258646   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.258654   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:22.258660   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:22.258708   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:22.289103   75402 cri.go:89] found id: ""
	I0816 18:17:22.289136   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.289147   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:22.289154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:22.289215   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:22.321828   75402 cri.go:89] found id: ""
	I0816 18:17:22.321857   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.321869   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:22.321877   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:22.321942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:22.353557   75402 cri.go:89] found id: ""
	I0816 18:17:22.353588   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.353597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:22.353602   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:22.353660   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:22.385078   75402 cri.go:89] found id: ""
	I0816 18:17:22.385103   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.385110   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:22.385116   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:22.385189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:22.415864   75402 cri.go:89] found id: ""
	I0816 18:17:22.415900   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.415913   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:22.415922   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:22.415990   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:22.449895   75402 cri.go:89] found id: ""
	I0816 18:17:22.449922   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.449942   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:22.449957   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:22.449974   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:22.523055   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:22.523073   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:22.523084   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:22.599680   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:22.599719   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:22.638021   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:22.638057   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.688970   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:22.689010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.941154   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.440580   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:20.207713   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.706805   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:21.448399   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:23.448444   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.202748   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:25.217316   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:25.217388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:25.249528   75402 cri.go:89] found id: ""
	I0816 18:17:25.249558   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.249566   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:25.249578   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:25.249625   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:25.282667   75402 cri.go:89] found id: ""
	I0816 18:17:25.282696   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.282706   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:25.282712   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:25.282764   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:25.314061   75402 cri.go:89] found id: ""
	I0816 18:17:25.314091   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.314101   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:25.314108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:25.314161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:25.351260   75402 cri.go:89] found id: ""
	I0816 18:17:25.351287   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.351296   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:25.351301   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:25.351352   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:25.388303   75402 cri.go:89] found id: ""
	I0816 18:17:25.388334   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.388345   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:25.388352   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:25.388412   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:25.422133   75402 cri.go:89] found id: ""
	I0816 18:17:25.422161   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.422169   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:25.422175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:25.422232   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:25.456749   75402 cri.go:89] found id: ""
	I0816 18:17:25.456775   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.456783   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:25.456789   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:25.456836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:25.494783   75402 cri.go:89] found id: ""
	I0816 18:17:25.494809   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.494817   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:25.494825   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:25.494836   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:25.561253   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:25.561290   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.580349   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:25.580383   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:25.656333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:25.656361   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:25.656378   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:25.733479   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:25.733515   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:24.444069   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.939743   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:24.707849   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.709711   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.448555   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:27.449070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:28.272217   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:28.285750   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:28.285822   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:28.318230   75402 cri.go:89] found id: ""
	I0816 18:17:28.318260   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.318268   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:28.318275   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:28.318344   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:28.351766   75402 cri.go:89] found id: ""
	I0816 18:17:28.351798   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.351808   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:28.351814   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:28.351872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:28.385543   75402 cri.go:89] found id: ""
	I0816 18:17:28.385572   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.385581   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:28.385588   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:28.385653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:28.418808   75402 cri.go:89] found id: ""
	I0816 18:17:28.418837   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.418846   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:28.418852   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:28.418900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:28.453883   75402 cri.go:89] found id: ""
	I0816 18:17:28.453911   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.453922   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:28.453929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:28.453996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:28.486261   75402 cri.go:89] found id: ""
	I0816 18:17:28.486291   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.486304   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:28.486310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:28.486366   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:28.520617   75402 cri.go:89] found id: ""
	I0816 18:17:28.520658   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.520670   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:28.520678   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:28.520731   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:28.552996   75402 cri.go:89] found id: ""
	I0816 18:17:28.553026   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.553036   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:28.553046   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:28.553061   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:28.604149   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:28.604192   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:28.617393   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:28.617421   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:28.683258   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:28.683279   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:28.683294   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.766933   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:28.766977   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.305897   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:31.326070   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:31.326143   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:31.375314   75402 cri.go:89] found id: ""
	I0816 18:17:31.375350   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.375361   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:31.375369   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:31.375429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:31.407372   75402 cri.go:89] found id: ""
	I0816 18:17:31.407398   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.407406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:31.407411   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:31.407459   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:31.445679   75402 cri.go:89] found id: ""
	I0816 18:17:31.445706   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.445714   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:31.445720   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:31.445781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:31.480040   75402 cri.go:89] found id: ""
	I0816 18:17:31.480072   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.480080   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:31.480085   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:31.480145   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:31.511143   75402 cri.go:89] found id: ""
	I0816 18:17:31.511171   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.511182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:31.511188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:31.511252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:31.544254   75402 cri.go:89] found id: ""
	I0816 18:17:31.544282   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.544293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:31.544300   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:31.544363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:31.579007   75402 cri.go:89] found id: ""
	I0816 18:17:31.579033   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.579041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:31.579046   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:31.579108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:31.619966   75402 cri.go:89] found id: ""
	I0816 18:17:31.619995   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.620005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:31.620018   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:31.620035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.657784   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:31.657815   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:31.706824   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:31.706853   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:31.719696   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:31.719721   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:31.786096   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:31.786124   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:31.786142   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.940711   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.440514   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.206929   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.706188   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:33.706244   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.948053   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:32.448453   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.363862   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:34.377365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:34.377430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:34.414191   75402 cri.go:89] found id: ""
	I0816 18:17:34.414216   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.414223   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:34.414229   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:34.414285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:34.446811   75402 cri.go:89] found id: ""
	I0816 18:17:34.446836   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.446843   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:34.446848   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:34.446905   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:34.477582   75402 cri.go:89] found id: ""
	I0816 18:17:34.477615   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.477627   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:34.477634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:34.477695   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:34.507868   75402 cri.go:89] found id: ""
	I0816 18:17:34.507901   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.507912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:34.507921   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:34.507984   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:34.538719   75402 cri.go:89] found id: ""
	I0816 18:17:34.538754   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.538765   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:34.538772   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:34.538826   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:34.571445   75402 cri.go:89] found id: ""
	I0816 18:17:34.571468   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.571477   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:34.571484   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:34.571557   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:34.601587   75402 cri.go:89] found id: ""
	I0816 18:17:34.601611   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.601618   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:34.601624   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:34.601669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:34.634850   75402 cri.go:89] found id: ""
	I0816 18:17:34.634878   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.634892   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:34.634906   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:34.634920   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:34.682828   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:34.682859   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:34.695796   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:34.695820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:34.762100   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:34.762121   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:34.762133   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.845329   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:34.845359   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:37.386266   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:37.398940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:37.399005   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:37.433072   75402 cri.go:89] found id: ""
	I0816 18:17:37.433099   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.433112   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:37.433118   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:37.433169   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:37.466968   75402 cri.go:89] found id: ""
	I0816 18:17:37.467001   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.467012   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:37.467021   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:37.467086   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:37.509268   75402 cri.go:89] found id: ""
	I0816 18:17:37.509291   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.509300   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:37.509306   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:37.509365   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:37.541295   75402 cri.go:89] found id: ""
	I0816 18:17:37.541338   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.541350   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:37.541357   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:37.541421   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:37.575423   75402 cri.go:89] found id: ""
	I0816 18:17:37.575453   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.575464   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:37.575472   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:37.575540   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:37.614787   75402 cri.go:89] found id: ""
	I0816 18:17:37.614817   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.614828   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:37.614835   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:37.614896   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:37.646396   75402 cri.go:89] found id: ""
	I0816 18:17:37.646430   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.646441   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:37.646449   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:37.646517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:37.679383   75402 cri.go:89] found id: ""
	I0816 18:17:37.679414   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.679423   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:37.679431   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:37.679442   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:37.729641   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:37.729673   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:37.742420   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:37.742448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:37.812572   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:37.812600   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:37.812615   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:37.887100   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:37.887137   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
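The cycle above (and each of the near-identical cycles that follow) is minikube probing the node over SSH for control-plane containers and then collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output; every probe comes back empty because no control-plane container is running. The sweep can be approximated by hand from inside the node (for example via "minikube ssh"); the following is a minimal sketch assuming crictl, journalctl and the bundled kubectl binary are present, reusing the same commands the log shows:

    # Same container probes the log runs, one per component name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"    # empty output = no container found
    done

    # Same log-gathering steps as in the cycle above.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig    # fails while the apiserver is down
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a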
	I0816 18:17:33.940380   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.941055   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.440700   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.706903   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.207115   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.947638   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:37.448511   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:39.448944   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.424202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:40.438231   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:40.438337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:40.474614   75402 cri.go:89] found id: ""
	I0816 18:17:40.474639   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.474648   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:40.474653   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:40.474701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:40.510123   75402 cri.go:89] found id: ""
	I0816 18:17:40.510154   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.510162   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:40.510167   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:40.510217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:40.548971   75402 cri.go:89] found id: ""
	I0816 18:17:40.549000   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.549008   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:40.549013   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:40.549069   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:40.595126   75402 cri.go:89] found id: ""
	I0816 18:17:40.595158   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.595167   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:40.595174   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:40.595220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:40.629769   75402 cri.go:89] found id: ""
	I0816 18:17:40.629793   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.629801   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:40.629807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:40.629871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:40.661889   75402 cri.go:89] found id: ""
	I0816 18:17:40.661922   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.661932   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:40.661939   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:40.662001   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:40.697764   75402 cri.go:89] found id: ""
	I0816 18:17:40.697790   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.697801   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:40.697808   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:40.697867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:40.734825   75402 cri.go:89] found id: ""
	I0816 18:17:40.734852   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.734862   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:40.734872   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:40.734939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:40.787975   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:40.788015   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:40.800817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:40.800843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:40.874182   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:40.874205   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:40.874219   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:40.960032   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:40.960066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:40.940284   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.943218   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.207943   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.707356   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:41.947437   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.947887   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.499770   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:43.513726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:43.513806   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:43.548368   75402 cri.go:89] found id: ""
	I0816 18:17:43.548396   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.548406   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:43.548413   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:43.548474   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:43.581177   75402 cri.go:89] found id: ""
	I0816 18:17:43.581205   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.581216   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:43.581223   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:43.581291   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:43.614315   75402 cri.go:89] found id: ""
	I0816 18:17:43.614354   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.614367   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:43.614374   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:43.614437   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:43.648608   75402 cri.go:89] found id: ""
	I0816 18:17:43.648645   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.648658   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:43.648669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:43.648722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:43.680549   75402 cri.go:89] found id: ""
	I0816 18:17:43.680586   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.680597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:43.680604   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:43.680686   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:43.710473   75402 cri.go:89] found id: ""
	I0816 18:17:43.710497   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.710506   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:43.710514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:43.710576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:43.741415   75402 cri.go:89] found id: ""
	I0816 18:17:43.741442   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.741450   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:43.741456   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:43.741505   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:43.775018   75402 cri.go:89] found id: ""
	I0816 18:17:43.775051   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.775063   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:43.775074   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:43.775087   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:43.825596   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:43.825630   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:43.839133   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:43.839161   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:43.905645   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:43.905667   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:43.905679   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:43.988860   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:43.988901   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.525896   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:46.539147   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:46.539229   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:46.570703   75402 cri.go:89] found id: ""
	I0816 18:17:46.570726   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.570734   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:46.570740   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:46.570785   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:46.605909   75402 cri.go:89] found id: ""
	I0816 18:17:46.605939   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.605954   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:46.605961   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:46.606013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:46.638865   75402 cri.go:89] found id: ""
	I0816 18:17:46.638899   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.638911   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:46.638919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:46.638994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:46.671869   75402 cri.go:89] found id: ""
	I0816 18:17:46.671904   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.671917   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:46.671926   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:46.671988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:46.703423   75402 cri.go:89] found id: ""
	I0816 18:17:46.703464   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.703473   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:46.703479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:46.703545   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:46.735824   75402 cri.go:89] found id: ""
	I0816 18:17:46.735853   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.735864   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:46.735871   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:46.735926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:46.767122   75402 cri.go:89] found id: ""
	I0816 18:17:46.767146   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.767154   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:46.767160   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:46.767207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:46.798093   75402 cri.go:89] found id: ""
	I0816 18:17:46.798126   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.798140   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:46.798152   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:46.798167   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.832699   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:46.832725   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:46.884212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:46.884246   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:46.896896   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:46.896921   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:46.968805   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:46.968824   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:46.968838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:45.440474   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.940127   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.206534   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.206973   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.948252   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:48.448086   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
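The interleaved pod_ready.go lines come from three other test clusters running in parallel (processes 74510, 74828 and 75006), each polling whether its metrics-server pod has reached the Ready condition; all three keep reporting "Ready":"False" throughout this window. A roughly equivalent manual check is sketched below; the context name is a placeholder and the k8s-app=metrics-server label is an assumption chosen to match the pods named in the log:

    # Hypothetical manual equivalent of the pod_ready.go poll; <profile> stands in
    # for the minikube profile/context under test.
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'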
	I0816 18:17:49.552581   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:49.565134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:49.565212   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:49.597012   75402 cri.go:89] found id: ""
	I0816 18:17:49.597042   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.597057   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:49.597067   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:49.597133   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:49.628902   75402 cri.go:89] found id: ""
	I0816 18:17:49.628935   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.628948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:49.628957   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:49.629025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:49.662668   75402 cri.go:89] found id: ""
	I0816 18:17:49.662698   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.662709   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:49.662715   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:49.662778   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:49.696354   75402 cri.go:89] found id: ""
	I0816 18:17:49.696381   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.696389   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:49.696395   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:49.696487   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:49.730801   75402 cri.go:89] found id: ""
	I0816 18:17:49.730838   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.730849   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:49.730856   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:49.730921   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:49.764474   75402 cri.go:89] found id: ""
	I0816 18:17:49.764503   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.764514   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:49.764522   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:49.764585   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:49.798577   75402 cri.go:89] found id: ""
	I0816 18:17:49.798616   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.798627   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:49.798634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:49.798703   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:49.830987   75402 cri.go:89] found id: ""
	I0816 18:17:49.831016   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.831024   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:49.831032   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:49.831043   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:49.883397   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:49.883433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:49.897208   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:49.897239   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:49.968363   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:49.968386   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:49.968398   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:50.056552   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:50.056583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:52.596191   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:52.609592   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:52.609668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:52.645775   75402 cri.go:89] found id: ""
	I0816 18:17:52.645807   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.645817   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:52.645823   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:52.645869   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:52.677817   75402 cri.go:89] found id: ""
	I0816 18:17:52.677852   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.677862   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:52.677870   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:52.677935   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:52.710618   75402 cri.go:89] found id: ""
	I0816 18:17:52.710648   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.710658   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:52.710664   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:52.710716   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:52.745830   75402 cri.go:89] found id: ""
	I0816 18:17:52.745858   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.745867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:52.745872   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:52.745929   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:52.778511   75402 cri.go:89] found id: ""
	I0816 18:17:52.778538   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.778548   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:52.778567   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:52.778632   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:52.810759   75402 cri.go:89] found id: ""
	I0816 18:17:52.810788   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.810800   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:52.810807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:52.810872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:52.843786   75402 cri.go:89] found id: ""
	I0816 18:17:52.843814   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.843824   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:52.843831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:52.843886   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:52.876886   75402 cri.go:89] found id: ""
	I0816 18:17:52.876914   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.876924   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:52.876934   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:52.876950   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:52.932519   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:52.932559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:52.946645   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:52.946671   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:53.018156   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:53.018177   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:53.018190   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:53.095562   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:53.095600   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:49.940263   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:51.940433   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.707635   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.206027   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:50.449204   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.949591   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.633820   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:55.646170   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:55.646238   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:55.678147   75402 cri.go:89] found id: ""
	I0816 18:17:55.678181   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.678194   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:55.678202   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:55.678264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:55.710910   75402 cri.go:89] found id: ""
	I0816 18:17:55.710938   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.710948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:55.710956   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:55.711012   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:55.744822   75402 cri.go:89] found id: ""
	I0816 18:17:55.744853   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.744863   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:55.744870   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:55.744931   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:55.791677   75402 cri.go:89] found id: ""
	I0816 18:17:55.791708   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.791719   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:55.791727   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:55.791788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:55.826448   75402 cri.go:89] found id: ""
	I0816 18:17:55.826481   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.826492   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:55.826500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:55.826564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:55.861178   75402 cri.go:89] found id: ""
	I0816 18:17:55.861210   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.861219   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:55.861225   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:55.861280   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:55.898073   75402 cri.go:89] found id: ""
	I0816 18:17:55.898099   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.898110   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:55.898117   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:55.898184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:55.931446   75402 cri.go:89] found id: ""
	I0816 18:17:55.931478   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.931487   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:55.931498   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:55.931514   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:55.999910   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:55.999931   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:55.999943   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:56.077240   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:56.077312   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:56.115479   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:56.115506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:56.166954   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:56.166989   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:54.440166   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.939865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:54.206368   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.206710   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.207053   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.448566   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:57.948891   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.680571   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:58.692824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:58.692890   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:58.729761   75402 cri.go:89] found id: ""
	I0816 18:17:58.729786   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.729794   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:58.729799   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:58.729857   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:58.764943   75402 cri.go:89] found id: ""
	I0816 18:17:58.765082   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.765113   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:58.765124   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:58.765179   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:58.801314   75402 cri.go:89] found id: ""
	I0816 18:17:58.801345   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.801357   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:58.801365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:58.801429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:58.833936   75402 cri.go:89] found id: ""
	I0816 18:17:58.833973   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.833982   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:58.833988   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:58.834046   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:58.870108   75402 cri.go:89] found id: ""
	I0816 18:17:58.870137   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.870148   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:58.870155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:58.870219   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:58.904157   75402 cri.go:89] found id: ""
	I0816 18:17:58.904184   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.904194   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:58.904201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:58.904264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:58.937862   75402 cri.go:89] found id: ""
	I0816 18:17:58.937891   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.937901   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:58.937909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:58.937972   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:58.972465   75402 cri.go:89] found id: ""
	I0816 18:17:58.972495   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.972506   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:58.972517   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:58.972532   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:59.047197   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:59.047223   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:59.047238   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:59.126634   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:59.126668   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:59.165528   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:59.165562   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:59.214294   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:59.214433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
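Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl cannot reach the apiserver because nothing is listening on localhost:8443. A quick check from inside the node, using the port taken from the error message (the curl probe is purely illustrative, not something the test itself runs):

    # Confirm nothing is serving the apiserver port referenced in the errors above.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -k https://localhost:8443/healthz    # expected: connection refused while the apiserver is down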
	I0816 18:18:01.729662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:01.742582   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:01.742642   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:01.776148   75402 cri.go:89] found id: ""
	I0816 18:18:01.776180   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.776188   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:01.776197   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:01.776243   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:01.809186   75402 cri.go:89] found id: ""
	I0816 18:18:01.809218   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.809229   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:01.809237   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:01.809307   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:01.842379   75402 cri.go:89] found id: ""
	I0816 18:18:01.842406   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.842417   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:01.842425   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:01.842490   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:01.874706   75402 cri.go:89] found id: ""
	I0816 18:18:01.874739   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.874747   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:01.874753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:01.874813   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:01.915567   75402 cri.go:89] found id: ""
	I0816 18:18:01.915596   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.915607   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:01.915615   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:01.915675   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:01.951527   75402 cri.go:89] found id: ""
	I0816 18:18:01.951559   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.951569   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:01.951576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:01.951638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:01.983822   75402 cri.go:89] found id: ""
	I0816 18:18:01.983848   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.983856   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:01.983861   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:01.983909   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:02.018976   75402 cri.go:89] found id: ""
	I0816 18:18:02.019003   75402 logs.go:276] 0 containers: []
	W0816 18:18:02.019012   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:02.019019   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:02.019033   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:02.071096   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:02.071131   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:02.085163   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:02.085189   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:02.154771   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:02.154789   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:02.154800   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:02.242068   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:02.242105   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:58.941456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:01.440404   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.208085   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.705334   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.447843   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.448334   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.790311   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:04.803215   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:04.803298   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:04.835834   75402 cri.go:89] found id: ""
	I0816 18:18:04.835868   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.835879   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:04.835886   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:04.835951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:04.870000   75402 cri.go:89] found id: ""
	I0816 18:18:04.870032   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.870042   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:04.870049   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:04.870111   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:04.906624   75402 cri.go:89] found id: ""
	I0816 18:18:04.906653   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.906663   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:04.906670   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:04.906730   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:04.940115   75402 cri.go:89] found id: ""
	I0816 18:18:04.940139   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.940148   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:04.940155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:04.940213   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:04.974461   75402 cri.go:89] found id: ""
	I0816 18:18:04.974493   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.974503   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:04.974510   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:04.974571   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:05.006593   75402 cri.go:89] found id: ""
	I0816 18:18:05.006618   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.006628   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:05.006635   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:05.006691   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:05.040041   75402 cri.go:89] found id: ""
	I0816 18:18:05.040066   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.040082   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:05.040089   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:05.040144   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:05.072968   75402 cri.go:89] found id: ""
	I0816 18:18:05.072996   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.073005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:05.073014   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:05.073025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:05.124510   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:05.124543   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:05.145566   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:05.145592   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:05.221874   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:05.221898   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:05.221914   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:05.297283   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:05.297316   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:07.837564   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:07.850372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:07.850441   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:07.882879   75402 cri.go:89] found id: ""
	I0816 18:18:07.882906   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.882915   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:07.882920   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:07.882978   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:07.916983   75402 cri.go:89] found id: ""
	I0816 18:18:07.917011   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.917019   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:07.917024   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:07.917075   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:07.953864   75402 cri.go:89] found id: ""
	I0816 18:18:07.953886   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.953896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:07.953903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:07.953951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:07.994375   75402 cri.go:89] found id: ""
	I0816 18:18:07.994399   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.994408   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:07.994414   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:07.994472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:08.029137   75402 cri.go:89] found id: ""
	I0816 18:18:08.029170   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.029182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:08.029189   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:08.029253   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:08.062331   75402 cri.go:89] found id: ""
	I0816 18:18:08.062358   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.062367   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:08.062373   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:08.062430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:08.097021   75402 cri.go:89] found id: ""
	I0816 18:18:08.097044   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.097051   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:08.097056   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:08.097112   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:03.940724   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.441847   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.706298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.707011   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.948066   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.948125   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.948992   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.131147   75402 cri.go:89] found id: ""
	I0816 18:18:08.131174   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.131184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:08.131192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:08.131203   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.182334   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:08.182373   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:08.195459   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:08.195485   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:08.260333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:08.260351   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:08.260363   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:08.344466   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:08.344506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:10.881640   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:10.896400   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:10.896482   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:10.934034   75402 cri.go:89] found id: ""
	I0816 18:18:10.934068   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.934076   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:10.934081   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:10.934130   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:10.966697   75402 cri.go:89] found id: ""
	I0816 18:18:10.966724   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.966733   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:10.966741   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:10.966807   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:11.000540   75402 cri.go:89] found id: ""
	I0816 18:18:11.000568   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.000579   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:11.000587   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:11.000665   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:11.034322   75402 cri.go:89] found id: ""
	I0816 18:18:11.034346   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.034354   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:11.034360   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:11.034407   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:11.067081   75402 cri.go:89] found id: ""
	I0816 18:18:11.067108   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.067116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:11.067122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:11.067170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:11.099726   75402 cri.go:89] found id: ""
	I0816 18:18:11.099753   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.099763   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:11.099770   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:11.099834   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:11.133187   75402 cri.go:89] found id: ""
	I0816 18:18:11.133216   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.133226   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:11.133235   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:11.133315   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:11.167121   75402 cri.go:89] found id: ""
	I0816 18:18:11.167157   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.167166   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:11.167177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:11.167194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:11.181396   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:11.181424   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:11.248286   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:11.248313   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:11.248325   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:11.328546   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:11.328583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:11.365534   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:11.365576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.939686   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.941097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.440001   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:09.207018   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:11.207677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.706818   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.949461   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.448057   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.919889   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:13.935097   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:13.935178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:13.973196   75402 cri.go:89] found id: ""
	I0816 18:18:13.973225   75402 logs.go:276] 0 containers: []
	W0816 18:18:13.973236   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:13.973244   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:13.973328   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:14.011913   75402 cri.go:89] found id: ""
	I0816 18:18:14.011936   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.011944   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:14.011950   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:14.012013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:14.048418   75402 cri.go:89] found id: ""
	I0816 18:18:14.048447   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.048459   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:14.048466   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:14.048515   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:14.082462   75402 cri.go:89] found id: ""
	I0816 18:18:14.082496   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.082506   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:14.082514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:14.082576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:14.114958   75402 cri.go:89] found id: ""
	I0816 18:18:14.114986   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.114996   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:14.115005   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:14.115067   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:14.154829   75402 cri.go:89] found id: ""
	I0816 18:18:14.154865   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.154878   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:14.154888   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:14.154957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:14.190012   75402 cri.go:89] found id: ""
	I0816 18:18:14.190045   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.190053   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:14.190058   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:14.190108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:14.223314   75402 cri.go:89] found id: ""
	I0816 18:18:14.223341   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.223350   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:14.223360   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:14.223381   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:14.274995   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:14.275035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:14.288518   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:14.288564   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:14.365668   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:14.365691   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:14.365705   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:14.445828   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:14.445866   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:16.981802   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:16.994729   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:16.994794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:17.029790   75402 cri.go:89] found id: ""
	I0816 18:18:17.029821   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.029839   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:17.029848   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:17.029912   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:17.063194   75402 cri.go:89] found id: ""
	I0816 18:18:17.063223   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.063233   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:17.063240   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:17.063293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:17.097808   75402 cri.go:89] found id: ""
	I0816 18:18:17.097831   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.097839   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:17.097844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:17.097900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:17.132646   75402 cri.go:89] found id: ""
	I0816 18:18:17.132682   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.132691   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:17.132697   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:17.132751   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:17.164285   75402 cri.go:89] found id: ""
	I0816 18:18:17.164316   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.164328   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:17.164335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:17.164391   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:17.195642   75402 cri.go:89] found id: ""
	I0816 18:18:17.195672   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.195683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:17.195691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:17.195754   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:17.228005   75402 cri.go:89] found id: ""
	I0816 18:18:17.228033   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.228041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:17.228047   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:17.228107   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:17.279195   75402 cri.go:89] found id: ""
	I0816 18:18:17.279229   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.279241   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:17.279253   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:17.279270   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:17.360084   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:17.360125   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:17.405184   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:17.405210   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:17.457453   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:17.457483   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:17.471472   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:17.471502   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:17.536478   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:15.939660   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.940456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:16.207019   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:18.706191   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:15.450419   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.948912   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.036644   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:20.050169   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:20.050244   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.087943   75402 cri.go:89] found id: ""
	I0816 18:18:20.087971   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.087981   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:20.087988   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:20.088051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:20.119908   75402 cri.go:89] found id: ""
	I0816 18:18:20.119931   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.119940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:20.119945   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:20.120013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:20.152115   75402 cri.go:89] found id: ""
	I0816 18:18:20.152146   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.152156   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:20.152162   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:20.152209   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:20.189464   75402 cri.go:89] found id: ""
	I0816 18:18:20.189488   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.189495   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:20.189500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:20.189550   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:20.224779   75402 cri.go:89] found id: ""
	I0816 18:18:20.224807   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.224817   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:20.224824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:20.224888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:20.257021   75402 cri.go:89] found id: ""
	I0816 18:18:20.257048   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.257059   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:20.257067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:20.257121   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:20.290991   75402 cri.go:89] found id: ""
	I0816 18:18:20.291023   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.291032   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:20.291039   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:20.291099   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:20.323674   75402 cri.go:89] found id: ""
	I0816 18:18:20.323704   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.323715   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:20.323726   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:20.323742   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:20.373411   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:20.373447   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:20.386954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:20.386981   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:20.464366   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.464384   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:20.464403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:20.541836   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:20.541881   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:23.085071   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:23.100460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:23.100524   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.440656   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.942713   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.706771   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.207824   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.448676   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.141239   75402 cri.go:89] found id: ""
	I0816 18:18:23.141269   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.141280   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:23.141287   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:23.141354   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:23.172914   75402 cri.go:89] found id: ""
	I0816 18:18:23.172941   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.172950   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:23.172958   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:23.173015   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:23.205593   75402 cri.go:89] found id: ""
	I0816 18:18:23.205621   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.205632   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:23.205640   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:23.205706   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:23.239358   75402 cri.go:89] found id: ""
	I0816 18:18:23.239383   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.239392   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:23.239401   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:23.239463   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:23.271798   75402 cri.go:89] found id: ""
	I0816 18:18:23.271828   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.271838   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:23.271844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:23.271911   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:23.305287   75402 cri.go:89] found id: ""
	I0816 18:18:23.305316   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.305327   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:23.305335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:23.305397   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:23.344041   75402 cri.go:89] found id: ""
	I0816 18:18:23.344067   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.344075   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:23.344080   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:23.344134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:23.376540   75402 cri.go:89] found id: ""
	I0816 18:18:23.376571   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.376583   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:23.376601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:23.376616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:23.428265   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:23.428301   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:23.441377   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:23.441404   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:23.509219   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:23.509243   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:23.509259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:23.589151   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:23.589186   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.126176   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:26.140228   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:26.140292   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:26.176768   75402 cri.go:89] found id: ""
	I0816 18:18:26.176807   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.176820   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:26.176829   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:26.176887   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:26.212357   75402 cri.go:89] found id: ""
	I0816 18:18:26.212383   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.212390   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:26.212396   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:26.212457   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:26.245256   75402 cri.go:89] found id: ""
	I0816 18:18:26.245290   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.245302   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:26.245309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:26.245370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:26.277525   75402 cri.go:89] found id: ""
	I0816 18:18:26.277561   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.277569   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:26.277575   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:26.277627   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:26.310928   75402 cri.go:89] found id: ""
	I0816 18:18:26.310956   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.310967   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:26.310976   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:26.311052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:26.344595   75402 cri.go:89] found id: ""
	I0816 18:18:26.344647   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.344661   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:26.344669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:26.344741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:26.377776   75402 cri.go:89] found id: ""
	I0816 18:18:26.377805   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.377814   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:26.377820   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:26.377872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:26.411139   75402 cri.go:89] found id: ""
	I0816 18:18:26.411167   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.411179   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:26.411190   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:26.411204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:26.493802   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:26.493838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.529542   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:26.529576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:26.583544   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:26.583588   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:26.596429   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:26.596459   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:26.667858   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:25.441062   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.940609   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.706109   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:28.206196   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.448352   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.947950   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:29.168766   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:29.182032   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:29.182103   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:29.220213   75402 cri.go:89] found id: ""
	I0816 18:18:29.220239   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.220247   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:29.220253   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:29.220300   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:29.257820   75402 cri.go:89] found id: ""
	I0816 18:18:29.257850   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.257861   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:29.257867   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:29.257933   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:29.290450   75402 cri.go:89] found id: ""
	I0816 18:18:29.290473   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.290480   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:29.290485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:29.290546   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:29.328032   75402 cri.go:89] found id: ""
	I0816 18:18:29.328061   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.328070   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:29.328076   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:29.328135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:29.362104   75402 cri.go:89] found id: ""
	I0816 18:18:29.362132   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.362141   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:29.362149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:29.362218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:29.395258   75402 cri.go:89] found id: ""
	I0816 18:18:29.395290   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.395301   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:29.395309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:29.395375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:29.426617   75402 cri.go:89] found id: ""
	I0816 18:18:29.426646   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.426656   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:29.426663   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:29.426725   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:29.462861   75402 cri.go:89] found id: ""
	I0816 18:18:29.462890   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.462901   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:29.462912   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:29.462928   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:29.514882   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:29.514915   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:29.528101   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:29.528128   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:29.598983   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.599005   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:29.599020   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:29.684955   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:29.684991   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.230155   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:32.244158   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:32.244226   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:32.281993   75402 cri.go:89] found id: ""
	I0816 18:18:32.282020   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.282031   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:32.282037   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:32.282100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:32.316870   75402 cri.go:89] found id: ""
	I0816 18:18:32.316896   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.316906   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:32.316914   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:32.316976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:32.352597   75402 cri.go:89] found id: ""
	I0816 18:18:32.352637   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.352649   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:32.352656   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:32.352722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:32.387520   75402 cri.go:89] found id: ""
	I0816 18:18:32.387564   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.387576   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:32.387584   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:32.387638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:32.421499   75402 cri.go:89] found id: ""
	I0816 18:18:32.421526   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.421537   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:32.421544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:32.421603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:32.460048   75402 cri.go:89] found id: ""
	I0816 18:18:32.460075   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.460086   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:32.460093   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:32.460151   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:32.498148   75402 cri.go:89] found id: ""
	I0816 18:18:32.498176   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.498184   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:32.498190   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:32.498248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:32.530683   75402 cri.go:89] found id: ""
	I0816 18:18:32.530717   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.530730   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:32.530741   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:32.530762   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:32.614776   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:32.614820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.655628   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:32.655667   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:32.722763   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:32.722807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:32.739817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:32.739847   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:32.819297   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:30.440684   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.441210   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.206433   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.707436   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.448781   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.457660   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:35.320173   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:35.332427   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:35.332503   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:35.366316   75402 cri.go:89] found id: ""
	I0816 18:18:35.366346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.366357   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:35.366365   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:35.366433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:35.399308   75402 cri.go:89] found id: ""
	I0816 18:18:35.399346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.399357   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:35.399367   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:35.399434   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:35.434926   75402 cri.go:89] found id: ""
	I0816 18:18:35.434958   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.434971   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:35.434980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:35.435042   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:35.473222   75402 cri.go:89] found id: ""
	I0816 18:18:35.473247   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.473258   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:35.473266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:35.473343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:35.505484   75402 cri.go:89] found id: ""
	I0816 18:18:35.505521   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.505533   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:35.505540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:35.505608   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:35.540532   75402 cri.go:89] found id: ""
	I0816 18:18:35.540573   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.540584   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:35.540590   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:35.540663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:35.574205   75402 cri.go:89] found id: ""
	I0816 18:18:35.574235   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.574245   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:35.574252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:35.574343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:35.614707   75402 cri.go:89] found id: ""
	I0816 18:18:35.614732   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.614739   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:35.614747   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:35.614759   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:35.690830   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:35.690861   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:35.726601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:35.726627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:35.774706   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:35.774736   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:35.787557   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:35.787616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:35.857474   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
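The block above is one pass of the diagnostic loop minikube falls into while the v1.20.0 control plane stays down: it asks the CRI for each control-plane component by name, finds nothing, then gathers CRI-O, kubelet, dmesg and container-status logs, and finally fails "describe nodes" because nothing is listening on localhost:8443. The same pass repeats every few seconds below. A rough bash equivalent of one pass, runnable by hand on the node over `minikube ssh` (component list and log sources are copied from the log lines; this is an illustrative sketch, not minikube's implementation):

    #!/usr/bin/env bash
    # One diagnostic pass: look for each control-plane container, then
    # collect the same fallback logs the "Gathering logs for ..." steps use.
    set -u
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done
    sudo journalctl -u crio -n 400    > crio.log
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > container-status.log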
	I0816 18:18:34.940337   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.440507   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:34.701151   74828 pod_ready.go:82] duration metric: took 4m0.000965442s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:34.701178   74828 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:34.701196   74828 pod_ready.go:39] duration metric: took 4m13.502588966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:34.701228   74828 kubeadm.go:597] duration metric: took 4m21.306103533s to restartPrimaryControlPlane
	W0816 18:18:34.701293   74828 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:34.701330   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
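The interleaved processes (75402, 74510, 75006, 74828) are each sitting in the same wait: pod_ready.go polls each pod's Ready condition, and the wait for metrics-server-6867b74b74-rxtwg above just hit its 4m0s limit, which is why that profile now resets the cluster. The status the waiter keeps seeing can be read directly with a generic kubectl jsonpath query (pod name taken from the log; the query is ordinary kubectl usage, not minikube's own code):

    # Read the condition the waiter polls: the pod's Ready status.
    kubectl --namespace kube-system get pod metrics-server-6867b74b74-rxtwg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "False" until the pod passes its readiness probe, then "True".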
	I0816 18:18:34.948583   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.447544   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:39.448942   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:38.358057   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:38.371128   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:38.371189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:38.404812   75402 cri.go:89] found id: ""
	I0816 18:18:38.404844   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.404855   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:38.404864   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:38.404926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:38.437922   75402 cri.go:89] found id: ""
	I0816 18:18:38.437950   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.437960   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:38.437967   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:38.438023   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:38.471474   75402 cri.go:89] found id: ""
	I0816 18:18:38.471509   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.471519   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:38.471525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:38.471582   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:38.510132   75402 cri.go:89] found id: ""
	I0816 18:18:38.510158   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.510168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:38.510184   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:38.510246   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:38.542212   75402 cri.go:89] found id: ""
	I0816 18:18:38.542251   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.542262   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:38.542269   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:38.542341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:38.579037   75402 cri.go:89] found id: ""
	I0816 18:18:38.579068   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.579076   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:38.579082   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:38.579129   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:38.619219   75402 cri.go:89] found id: ""
	I0816 18:18:38.619252   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.619263   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:38.619272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:38.619335   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:38.655124   75402 cri.go:89] found id: ""
	I0816 18:18:38.655149   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.655169   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:38.655180   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:38.655194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:38.737857   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:38.737894   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:38.779777   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:38.779806   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:38.831556   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:38.831590   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:38.844496   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:38.844523   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:38.914543   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.415612   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:41.428187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:41.428251   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:41.462932   75402 cri.go:89] found id: ""
	I0816 18:18:41.462964   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.462975   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:41.462983   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:41.463043   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:41.497712   75402 cri.go:89] found id: ""
	I0816 18:18:41.497739   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.497748   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:41.497754   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:41.497804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:41.528430   75402 cri.go:89] found id: ""
	I0816 18:18:41.528455   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.528463   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:41.528468   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:41.528527   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:41.560048   75402 cri.go:89] found id: ""
	I0816 18:18:41.560071   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.560081   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:41.560088   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:41.560142   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:41.592536   75402 cri.go:89] found id: ""
	I0816 18:18:41.592566   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.592577   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:41.592585   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:41.592663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:41.626850   75402 cri.go:89] found id: ""
	I0816 18:18:41.626884   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.626894   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:41.626902   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:41.626965   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:41.660452   75402 cri.go:89] found id: ""
	I0816 18:18:41.660478   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.660486   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:41.660491   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:41.660542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:41.695990   75402 cri.go:89] found id: ""
	I0816 18:18:41.696012   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.696020   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:41.696028   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:41.696039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:41.733107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:41.733134   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:41.782812   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:41.782843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:41.795954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:41.795984   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:41.867473   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.867526   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:41.867545   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:39.442037   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.940088   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.948682   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:43.942215   75006 pod_ready.go:82] duration metric: took 4m0.000164284s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:43.942239   75006 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:43.942255   75006 pod_ready.go:39] duration metric: took 4m12.163955241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:43.942279   75006 kubeadm.go:597] duration metric: took 4m21.898271101s to restartPrimaryControlPlane
	W0816 18:18:43.942326   75006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:43.942352   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:44.450340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:44.463299   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:44.463361   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:44.495068   75402 cri.go:89] found id: ""
	I0816 18:18:44.495098   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.495108   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:44.495116   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:44.495221   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:44.529615   75402 cri.go:89] found id: ""
	I0816 18:18:44.529638   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.529646   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:44.529651   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:44.529701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:44.565275   75402 cri.go:89] found id: ""
	I0816 18:18:44.565298   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.565306   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:44.565321   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:44.565384   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:44.598554   75402 cri.go:89] found id: ""
	I0816 18:18:44.598590   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.598601   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:44.598609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:44.598673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:44.631389   75402 cri.go:89] found id: ""
	I0816 18:18:44.631422   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.631436   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:44.631446   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:44.631519   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:44.663986   75402 cri.go:89] found id: ""
	I0816 18:18:44.664013   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.664023   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:44.664031   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:44.664095   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:44.700238   75402 cri.go:89] found id: ""
	I0816 18:18:44.700263   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.700272   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:44.700277   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:44.700330   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:44.732737   75402 cri.go:89] found id: ""
	I0816 18:18:44.732766   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.732779   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:44.732790   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:44.732807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.806427   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:44.806462   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:44.842965   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:44.842994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:44.895745   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:44.895781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:44.909850   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:44.909885   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:44.979315   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:47.479563   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:47.491876   75402 kubeadm.go:597] duration metric: took 4m4.431091965s to restartPrimaryControlPlane
	W0816 18:18:47.491939   75402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:47.491962   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:43.941047   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:46.440592   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:48.441208   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:51.168302   75402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676317513s)
	I0816 18:18:51.168387   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:18:51.182492   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:18:51.192403   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:18:51.202058   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:18:51.202075   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:18:51.202115   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:18:51.210661   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:18:51.210721   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:18:51.219979   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:18:51.228422   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:18:51.228488   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:18:51.237159   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.245555   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:18:51.245622   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.253986   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:18:51.261885   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:18:51.261927   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
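Before re-running kubeadm init, the stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not contain it; here every file is already missing, so each grep exits with status 2 and the rm is a no-op. The same check-and-remove pass written out as a plain loop (endpoint and file names copied from the log; an illustrative sketch only):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint (or the file itself) is missing.
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done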
	I0816 18:18:51.270479   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:18:51.335784   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:18:51.335883   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:18:51.482910   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:18:51.483069   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:18:51.483228   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:18:51.652730   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:18:51.655077   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:18:51.655185   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:18:51.655304   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:18:51.655425   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:18:51.655521   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:18:51.657408   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:18:51.657485   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:18:51.657561   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:18:51.657645   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:18:51.657748   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:18:51.657854   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:18:51.657911   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:18:51.657984   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:18:51.720786   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:18:51.991165   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:18:52.140983   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:18:52.453361   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:18:52.467210   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:18:52.469222   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:18:52.469338   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:18:52.590938   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:18:52.592875   75402 out.go:235]   - Booting up control plane ...
	I0816 18:18:52.592987   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:18:52.602597   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:18:52.603616   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:18:52.604417   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:18:52.606669   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:18:50.939639   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:52.940202   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:54.940917   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:57.439382   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:59.443139   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:01.940496   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:00.803654   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.102297191s)
	I0816 18:19:00.803740   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:00.818126   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:00.827602   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:00.836389   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:00.836410   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:00.836455   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:19:00.844830   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:00.844880   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:00.853736   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:19:00.862795   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:00.862855   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:00.872056   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.880410   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:00.880461   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.889000   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:19:00.897508   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:00.897568   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:00.906256   74828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:00.953336   74828 kubeadm.go:310] W0816 18:19:00.929461    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:00.955337   74828 kubeadm.go:310] W0816 18:19:00.931382    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:01.068247   74828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:03.940545   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:06.439727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:08.440027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:09.225829   74828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:09.225908   74828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:09.226014   74828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:09.226126   74828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:09.226242   74828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:09.226329   74828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:09.228065   74828 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:09.228133   74828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:09.228183   74828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:09.228252   74828 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:09.228315   74828 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:09.228403   74828 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:09.228489   74828 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:09.228584   74828 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:09.228686   74828 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:09.228787   74828 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:09.228864   74828 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:09.228903   74828 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:09.228983   74828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:09.229052   74828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:09.229147   74828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:09.229234   74828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:09.229332   74828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:09.229410   74828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:09.229532   74828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:09.229607   74828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.230874   74828 out.go:235]   - Booting up control plane ...
	I0816 18:19:09.230948   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:09.231032   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:09.231090   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:09.231202   74828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:09.231321   74828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:09.231381   74828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:09.231572   74828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:09.231662   74828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:09.231711   74828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.32263ms
	I0816 18:19:09.231774   74828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:09.231824   74828 kubeadm.go:310] [api-check] The API server is healthy after 5.002367118s
	I0816 18:19:09.231923   74828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:09.232091   74828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:09.232166   74828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:09.232419   74828 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-864476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:09.232497   74828 kubeadm.go:310] [bootstrap-token] Using token: 6m1jus.xr9uhx26t28q092p
	I0816 18:19:09.233962   74828 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:09.234068   74828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:09.234164   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:09.234315   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:09.234425   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:09.234522   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:09.234615   74828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:09.234775   74828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:09.234830   74828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:09.234892   74828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:09.234901   74828 kubeadm.go:310] 
	I0816 18:19:09.234971   74828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:09.234980   74828 kubeadm.go:310] 
	I0816 18:19:09.235067   74828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:09.235076   74828 kubeadm.go:310] 
	I0816 18:19:09.235115   74828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:09.235194   74828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:09.235271   74828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:09.235280   74828 kubeadm.go:310] 
	I0816 18:19:09.235367   74828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:09.235376   74828 kubeadm.go:310] 
	I0816 18:19:09.235448   74828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:09.235459   74828 kubeadm.go:310] 
	I0816 18:19:09.235533   74828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:09.235607   74828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:09.235677   74828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:09.235683   74828 kubeadm.go:310] 
	I0816 18:19:09.235795   74828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:09.235907   74828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:09.235916   74828 kubeadm.go:310] 
	I0816 18:19:09.235986   74828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236080   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:09.236099   74828 kubeadm.go:310] 	--control-plane 
	I0816 18:19:09.236105   74828 kubeadm.go:310] 
	I0816 18:19:09.236177   74828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:09.236185   74828 kubeadm.go:310] 
	I0816 18:19:09.236268   74828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236403   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
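The kubelet-check and api-check lines in the init output above wait on two health endpoints: the kubelet's local healthz on port 10248 and the API server's health check behind port 8443. Both can be poked by hand on the node; the kubelet endpoint answers plain unauthenticated HTTP, and the API server check can be approximated through kubectl once admin.conf has been written (paths and ports taken from the log; illustrative commands, not what kubeadm itself runs):

    # Kubelet health (the endpoint named in the "[kubelet-check]" line):
    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
    # API server health, via the freshly written admin kubeconfig:
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl \
      --kubeconfig=/etc/kubernetes/admin.conf get --raw=/healthz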
	I0816 18:19:09.236416   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:19:09.236422   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:09.237971   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:10.069497   75006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.127122656s)
	I0816 18:19:10.069585   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:10.085322   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:10.098736   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:10.108163   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:10.108183   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:10.108224   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:19:10.117330   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:10.117382   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:10.127090   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:19:10.135574   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:10.135648   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:10.146127   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.154474   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:10.154533   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.163245   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:19:10.171315   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:10.171375   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:10.181088   75006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:10.225495   75006 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:10.225571   75006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:10.327332   75006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:10.327442   75006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:10.327586   75006 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:10.335739   75006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:10.337610   75006 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:10.337730   75006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:10.337818   75006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:10.337935   75006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:10.338054   75006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:10.338174   75006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:10.338254   75006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:10.338359   75006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:10.338452   75006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:10.338562   75006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:10.338668   75006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:10.338718   75006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:10.338796   75006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:10.437447   75006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:10.868191   75006 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:10.961497   75006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:11.363158   75006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:11.963929   75006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:11.964410   75006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:11.967675   75006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.239250   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:09.250270   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
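The 496-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a bridge CNI conflist generally has the following shape; the values below are illustrative defaults, not the file minikube actually writes:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF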
	I0816 18:19:09.267205   74828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:09.267346   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.267366   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-864476 minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=no-preload-864476 minikube.k8s.io/primary=true
	I0816 18:19:09.282111   74828 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:09.471160   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.971453   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.471576   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.971748   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.471954   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.971371   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.471626   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.972021   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.472254   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.588350   74828 kubeadm.go:1113] duration metric: took 4.321062687s to wait for elevateKubeSystemPrivileges
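elevateKubeSystemPrivileges is the step just logged: it creates the minikube-rbac clusterrolebinding and then re-runs `kubectl get sa default` every half second until the default ServiceAccount shows up, which took about 4.3s here. The wait can be expressed as a small loop (binary and kubeconfig paths copied from the log; an illustrative sketch, not minikube's code):

    kubectl=/var/lib/minikube/binaries/v1.31.0/kubectl
    kubeconfig=/var/lib/minikube/kubeconfig
    # Poll until the default ServiceAccount exists, i.e. the controller-manager
    # has finished bootstrapping the default namespace.
    until sudo "$kubectl" get sa default --kubeconfig="$kubeconfig" >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default service account present"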
	I0816 18:19:13.588392   74828 kubeadm.go:394] duration metric: took 5m0.245036951s to StartCluster
	I0816 18:19:13.588413   74828 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.588500   74828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:13.591118   74828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.591418   74828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:13.591683   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:13.591744   74828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:13.591809   74828 addons.go:69] Setting storage-provisioner=true in profile "no-preload-864476"
	I0816 18:19:13.591839   74828 addons.go:234] Setting addon storage-provisioner=true in "no-preload-864476"
	W0816 18:19:13.591851   74828 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:19:13.591882   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592025   74828 addons.go:69] Setting default-storageclass=true in profile "no-preload-864476"
	I0816 18:19:13.592070   74828 addons.go:69] Setting metrics-server=true in profile "no-preload-864476"
	I0816 18:19:13.592135   74828 addons.go:234] Setting addon metrics-server=true in "no-preload-864476"
	W0816 18:19:13.592150   74828 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:13.592073   74828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864476"
	I0816 18:19:13.592272   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592206   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592326   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592654   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592677   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592731   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592753   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592790   74828 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:13.594236   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:13.613019   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0816 18:19:13.613061   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0816 18:19:13.613087   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0816 18:19:13.613498   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613552   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613708   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.614094   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614113   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614198   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614222   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614403   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614420   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614478   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614675   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614728   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614856   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.615039   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615068   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.615401   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615442   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.619787   74828 addons.go:234] Setting addon default-storageclass=true in "no-preload-864476"
	W0816 18:19:13.619815   74828 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:13.619848   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.620274   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.620438   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.642013   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0816 18:19:13.642196   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0816 18:19:13.642654   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643201   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.643227   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.643304   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643888   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644065   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.644086   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.644537   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644548   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.644591   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.645002   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.646881   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0816 18:19:13.647127   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.647406   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.648126   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.648156   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.648725   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.648935   74828 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:13.649121   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.649823   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:13.649840   74828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:13.649861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.651524   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.652917   74828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:10.441027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:12.939870   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:13.653916   74828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.653933   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:13.653952   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.654035   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654463   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.654482   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654665   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.654883   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.655044   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.655247   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.657315   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657699   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.657783   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657974   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.658125   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.658247   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.658362   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.670111   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0816 18:19:13.670711   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.671220   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.671239   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.671585   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.671778   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.673274   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.673480   74828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:13.673493   74828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:13.673511   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.677160   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677542   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.677564   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677854   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.678049   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.678170   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.678263   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:11.970291   75006 out.go:235]   - Booting up control plane ...
	I0816 18:19:11.970385   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:11.970516   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:11.970617   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:11.988374   75006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:11.997980   75006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:11.998045   75006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:12.132297   75006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:12.132447   75006 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:13.135489   75006 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003222114s
	I0816 18:19:13.135584   75006 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:13.840111   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:13.903130   74828 node_ready.go:35] waiting up to 6m0s for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915130   74828 node_ready.go:49] node "no-preload-864476" has status "Ready":"True"
	I0816 18:19:13.915163   74828 node_ready.go:38] duration metric: took 12.001127ms for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915174   74828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:13.926756   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:13.944598   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.971002   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:13.971036   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:13.998897   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:14.015731   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:14.015754   74828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:14.080186   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:14.080212   74828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:14.187279   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:15.075984   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077053329s)
	I0816 18:19:15.076058   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076071   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076364   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131733705s)
	I0816 18:19:15.076478   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076495   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076405   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076567   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076591   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076600   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076436   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.076786   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076838   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076859   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076879   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076969   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076987   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.077443   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.077517   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.077535   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.164872   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.164903   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.165218   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.165238   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373294   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1859614s)
	I0816 18:19:15.373399   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373417   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.373716   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.373769   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.373804   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373825   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373837   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.374124   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.374130   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.374181   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.374192   74828 addons.go:475] Verifying addon metrics-server=true in "no-preload-864476"
	I0816 18:19:15.375801   74828 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:17.638005   75006 kubeadm.go:310] [api-check] The API server is healthy after 4.502130995s
	I0816 18:19:17.658334   75006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:17.678882   75006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:17.709612   75006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:17.709881   75006 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-256678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:17.724755   75006 kubeadm.go:310] [bootstrap-token] Using token: cdypho.k0vxtmnp4c93945s
	I0816 18:19:14.941895   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.440923   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:15.377611   74828 addons.go:510] duration metric: took 1.785861834s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:15.934515   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:18.435321   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.726222   75006 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:17.726361   75006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:17.733325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:17.740707   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:17.747325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:17.751554   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:17.761084   75006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:18.044607   75006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:18.485134   75006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:19.044481   75006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:19.045968   75006 kubeadm.go:310] 
	I0816 18:19:19.046038   75006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:19.046069   75006 kubeadm.go:310] 
	I0816 18:19:19.046185   75006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:19.046198   75006 kubeadm.go:310] 
	I0816 18:19:19.046229   75006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:19.046298   75006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:19.046343   75006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:19.046349   75006 kubeadm.go:310] 
	I0816 18:19:19.046396   75006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:19.046413   75006 kubeadm.go:310] 
	I0816 18:19:19.046504   75006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:19.046529   75006 kubeadm.go:310] 
	I0816 18:19:19.046614   75006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:19.046718   75006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:19.046813   75006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:19.046828   75006 kubeadm.go:310] 
	I0816 18:19:19.046941   75006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:19.047047   75006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:19.047056   75006 kubeadm.go:310] 
	I0816 18:19:19.047153   75006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047304   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:19.047346   75006 kubeadm.go:310] 	--control-plane 
	I0816 18:19:19.047358   75006 kubeadm.go:310] 
	I0816 18:19:19.047470   75006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:19.047480   75006 kubeadm.go:310] 
	I0816 18:19:19.047596   75006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047740   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:19.048871   75006 kubeadm.go:310] W0816 18:19:10.202021    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049167   75006 kubeadm.go:310] W0816 18:19:10.202700    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049279   75006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:19.049304   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:19:19.049318   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:19.051543   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:19.052677   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:19.063536   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:19.084460   75006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:19.084540   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.084608   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-256678 minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=default-k8s-diff-port-256678 minikube.k8s.io/primary=true
	I0816 18:19:19.257760   75006 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:19.258124   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.759000   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.940737   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:22.440273   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.934243   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:23.433046   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.258798   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:20.759112   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.258598   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.758433   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.258181   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.758276   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.258184   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.758168   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.846653   75006 kubeadm.go:1113] duration metric: took 4.762173901s to wait for elevateKubeSystemPrivileges
	I0816 18:19:23.846688   75006 kubeadm.go:394] duration metric: took 5m1.846731834s to StartCluster
	I0816 18:19:23.846708   75006 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.846784   75006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:23.848375   75006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.848662   75006 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:23.848750   75006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:23.848814   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:23.848840   75006 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848858   75006 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848866   75006 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848878   75006 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-256678"
	I0816 18:19:23.848882   75006 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.848887   75006 addons.go:243] addon storage-provisioner should already be in state true
	W0816 18:19:23.848890   75006 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:23.848915   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848918   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848914   75006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-256678"
	I0816 18:19:23.849232   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849259   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849271   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849293   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849362   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849404   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.850478   75006 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:23.852034   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:23.865786   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0816 18:19:23.865939   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0816 18:19:23.866248   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866304   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866398   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0816 18:19:23.866816   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866845   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866860   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.866863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866935   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867328   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867333   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867430   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.867447   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.867742   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867871   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.867897   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.868227   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.868247   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.870993   75006 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.871020   75006 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:23.871051   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.871403   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.871433   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.885139   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0816 18:19:23.885814   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.886386   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.886403   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.886814   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.886856   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0816 18:19:23.887024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.887202   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.887542   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0816 18:19:23.887784   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.887797   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.887863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.888165   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.888372   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.888389   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.889026   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.889254   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.889268   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.889518   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.889758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.890483   75006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:23.891262   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.891838   75006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:23.891859   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:23.891877   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.892581   75006 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:23.893621   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:23.893684   75006 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:23.893882   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.894413   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.894973   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.894994   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.895161   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.895322   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.895578   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.895757   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.897167   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897666   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.897685   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897802   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.897972   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.898132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.898248   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.906377   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
	I0816 18:19:23.906708   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.907497   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.907513   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.907932   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.908240   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.909917   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.910141   75006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:23.910159   75006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:23.910177   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.912435   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912678   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.912710   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912858   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.912982   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.913066   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.913138   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:24.062487   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:24.083148   75006 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092886   75006 node_ready.go:49] node "default-k8s-diff-port-256678" has status "Ready":"True"
	I0816 18:19:24.092907   75006 node_ready.go:38] duration metric: took 9.72996ms for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092916   75006 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.099123   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.184211   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:24.197461   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:24.197491   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:24.219263   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:24.258463   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:24.258498   75006 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:24.355822   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.355902   75006 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:24.436401   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.866038   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866125   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866058   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866163   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866517   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866526   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866536   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866546   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866600   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866626   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866636   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866649   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866676   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866778   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866793   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866810   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866888   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866923   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866932   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886041   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.886065   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.886338   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.886359   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886384   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.225367   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225397   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225704   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.225720   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.225730   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225739   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225961   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.226005   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.226025   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.226043   75006 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-256678"
	I0816 18:19:25.227605   75006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:23.934167   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.934191   74828 pod_ready.go:82] duration metric: took 10.007408518s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.934200   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940226   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.940249   74828 pod_ready.go:82] duration metric: took 6.040513ms for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940260   74828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945330   74828 pod_ready.go:93] pod "etcd-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.945351   74828 pod_ready.go:82] duration metric: took 5.082362ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945361   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949772   74828 pod_ready.go:93] pod "kube-apiserver-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.949800   74828 pod_ready.go:82] duration metric: took 4.429575ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949810   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954308   74828 pod_ready.go:93] pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.954328   74828 pod_ready.go:82] duration metric: took 4.510361ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954338   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331265   74828 pod_ready.go:93] pod "kube-proxy-6g6zx" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.331306   74828 pod_ready.go:82] duration metric: took 376.9609ms for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331320   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730715   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.730740   74828 pod_ready.go:82] duration metric: took 399.412376ms for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730748   74828 pod_ready.go:39] duration metric: took 10.815561534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.730761   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:24.730820   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:24.746674   74828 api_server.go:72] duration metric: took 11.155216371s to wait for apiserver process to appear ...
	I0816 18:19:24.746697   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:24.746714   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:19:24.750801   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:19:24.751835   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:24.751864   74828 api_server.go:131] duration metric: took 5.159229ms to wait for apiserver health ...
	I0816 18:19:24.751872   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:24.935471   74828 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:24.935510   74828 system_pods.go:61] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:24.935520   74828 system_pods.go:61] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:24.935539   74828 system_pods.go:61] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:24.935548   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:24.935555   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:24.935562   74828 system_pods.go:61] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:24.935572   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:24.935584   74828 system_pods.go:61] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:24.935596   74828 system_pods.go:61] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:24.935607   74828 system_pods.go:74] duration metric: took 183.727841ms to wait for pod list to return data ...
	I0816 18:19:24.935621   74828 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:25.132713   74828 default_sa.go:45] found service account: "default"
	I0816 18:19:25.132740   74828 default_sa.go:55] duration metric: took 197.112152ms for default service account to be created ...
	I0816 18:19:25.132750   74828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:25.335012   74828 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:25.335043   74828 system_pods.go:89] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:25.335048   74828 system_pods.go:89] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:25.335052   74828 system_pods.go:89] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:25.335057   74828 system_pods.go:89] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:25.335061   74828 system_pods.go:89] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:25.335064   74828 system_pods.go:89] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:25.335068   74828 system_pods.go:89] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:25.335075   74828 system_pods.go:89] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:25.335081   74828 system_pods.go:89] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:25.335089   74828 system_pods.go:126] duration metric: took 202.33381ms to wait for k8s-apps to be running ...
	I0816 18:19:25.335098   74828 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:25.335141   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:25.349420   74828 system_svc.go:56] duration metric: took 14.310938ms WaitForService to wait for kubelet
	I0816 18:19:25.349457   74828 kubeadm.go:582] duration metric: took 11.758002576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:25.349480   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:25.532145   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:25.532175   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:25.532189   74828 node_conditions.go:105] duration metric: took 182.702662ms to run NodePressure ...
	I0816 18:19:25.532200   74828 start.go:241] waiting for startup goroutines ...
	I0816 18:19:25.532209   74828 start.go:246] waiting for cluster config update ...
	I0816 18:19:25.532222   74828 start.go:255] writing updated cluster config ...
	I0816 18:19:25.532529   74828 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:25.588070   74828 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:25.589615   74828 out.go:177] * Done! kubectl is now configured to use "no-preload-864476" cluster and "default" namespace by default
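The node_conditions.go lines above read each node's advertised capacity (ephemeral storage and CPU) straight from the node status to verify NodePressure. A minimal client-go sketch of that read, reusing the guest kubeconfig path that appears elsewhere in this log; the file name and program are illustrative, not minikube's own code:

	// nodecap.go: print per-node CPU and ephemeral-storage capacity,
	// mirroring what the node_conditions.go lines above report (illustrative sketch).
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
		}
	}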
	I0816 18:19:24.440489   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:25.441683   74510 pod_ready.go:82] duration metric: took 4m0.007816418s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:19:25.441706   74510 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 18:19:25.441714   74510 pod_ready.go:39] duration metric: took 4m6.551547163s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
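The four-minute timeout recorded above is a poll on the metrics-server pod's Ready condition that never flips to True before the deadline. A minimal sketch of that polling pattern using client-go's wait helper; the function name is illustrative, and the imports assumed are context, time, corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/apimachinery/pkg/util/wait", and "k8s.io/client-go/kubernetes":

	// waitReady polls a pod until its Ready condition is True or the timeout expires,
	// the same wait-loop behavior the pod_ready.go messages above describe.
	func waitReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling until the deadline
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}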
	I0816 18:19:25.441726   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:25.441753   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:25.441805   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:25.492207   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.492235   74510 cri.go:89] found id: ""
	I0816 18:19:25.492245   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:25.492313   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.497307   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:25.497388   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:25.537185   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.537211   74510 cri.go:89] found id: ""
	I0816 18:19:25.537220   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:25.537422   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.546564   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:25.546644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:25.602794   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.602817   74510 cri.go:89] found id: ""
	I0816 18:19:25.602827   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:25.602879   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.609018   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:25.609097   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:25.657942   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:25.657970   74510 cri.go:89] found id: ""
	I0816 18:19:25.657980   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:25.658044   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.663485   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:25.663551   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:25.709526   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:25.709554   74510 cri.go:89] found id: ""
	I0816 18:19:25.709564   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:25.709612   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.715845   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:25.715898   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:25.766505   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:25.766522   74510 cri.go:89] found id: ""
	I0816 18:19:25.766529   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:25.766573   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.771051   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:25.771127   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:25.810669   74510 cri.go:89] found id: ""
	I0816 18:19:25.810699   74510 logs.go:276] 0 containers: []
	W0816 18:19:25.810711   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:25.810720   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:25.810779   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:25.851412   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.851432   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:25.851438   74510 cri.go:89] found id: ""
	I0816 18:19:25.851454   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:25.851507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.856154   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.860812   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:25.860837   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.910929   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:25.910957   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.951932   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:25.951959   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.999861   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:25.999894   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:26.036535   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:26.036559   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:26.089637   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:26.089675   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:26.157679   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:26.157714   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:26.171402   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:26.171432   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:26.209537   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:26.209564   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:26.252702   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:26.252732   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:26.303169   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:26.303203   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:26.784058   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:26.784090   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:26.904095   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:26.904137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
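Each "Gathering logs for ..." step above shells the corresponding crictl command on the guest over SSH. A local sketch of the same call with os/exec; the helper name is illustrative, and the imports assumed are os/exec and strconv:

	// containerLogs returns the last `tail` lines of a CRI container's logs by
	// invoking crictl, as the log-gathering steps above do via ssh_runner.
	func containerLogs(id string, tail int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", strconv.Itoa(tail), id).CombinedOutput()
		return string(out), err
	}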
	I0816 18:19:25.228674   75006 addons.go:510] duration metric: took 1.37992722s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:26.105147   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:28.107202   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.607933   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:19:32.608136   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:32.608430   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
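The [kubelet-check] failures above come from a plain HTTP GET against the kubelet's healthz endpoint on localhost:10248 (the curl shown in the message); "connection refused" means nothing is listening there yet. A minimal sketch of the same probe, assuming only net/http; the function name is illustrative:

	// kubeletHealthy reports whether the kubelet's local healthz endpoint answers 200,
	// the probe that the [kubelet-check] lines above keep failing.
	func kubeletHealthy() bool {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err != nil {
			return false // e.g. connection refused: kubelet not listening yet
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}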
	I0816 18:19:29.459100   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:29.476158   74510 api_server.go:72] duration metric: took 4m17.827179017s to wait for apiserver process to appear ...
	I0816 18:19:29.476183   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:29.476222   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:29.476279   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:29.509739   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:29.509767   74510 cri.go:89] found id: ""
	I0816 18:19:29.509776   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:29.509836   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.516078   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:29.516150   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:29.553766   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:29.553795   74510 cri.go:89] found id: ""
	I0816 18:19:29.553805   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:29.553857   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.558145   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:29.558210   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:29.599559   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:29.599583   74510 cri.go:89] found id: ""
	I0816 18:19:29.599594   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:29.599651   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.604108   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:29.604187   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:29.641990   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:29.642009   74510 cri.go:89] found id: ""
	I0816 18:19:29.642016   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:29.642062   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.645990   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:29.646047   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:29.679480   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:29.679505   74510 cri.go:89] found id: ""
	I0816 18:19:29.679514   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:29.679571   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.683361   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:29.683425   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:29.733167   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:29.733197   74510 cri.go:89] found id: ""
	I0816 18:19:29.733208   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:29.733266   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.737449   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:29.737518   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:29.771597   74510 cri.go:89] found id: ""
	I0816 18:19:29.771628   74510 logs.go:276] 0 containers: []
	W0816 18:19:29.771639   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:29.771647   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:29.771714   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:29.812346   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:29.812375   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:29.812381   74510 cri.go:89] found id: ""
	I0816 18:19:29.812390   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:29.812447   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.817909   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.821575   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:29.821602   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:30.288789   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:30.288836   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:30.332874   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:30.332904   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:30.347128   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:30.347168   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:30.456809   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:30.456845   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:30.505332   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:30.505362   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:30.540765   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:30.540798   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.576047   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:30.576077   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:30.611956   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:30.611992   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:30.678135   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:30.678177   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:30.732409   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:30.732437   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:30.773306   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:30.773331   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:30.827732   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:30.827763   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.367134   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:19:33.371523   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:19:33.372537   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:33.372560   74510 api_server.go:131] duration metric: took 3.896368169s to wait for apiserver health ...
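The healthz wait above issues an HTTPS GET to the apiserver's /healthz and treats a 200 "ok" response as healthy. A sketch of that check; minikube's real client trusts the cluster CA, whereas this illustrative version skips verification for brevity and assumes crypto/tls, net/http, and time:

	// apiserverHealthz reports whether https://<hostport>/healthz returns 200,
	// mirroring the api_server.go healthz wait above (TLS verification skipped here).
	func apiserverHealthz(hostport string) bool {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		resp, err := client.Get("https://" + hostport + "/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}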
	I0816 18:19:33.372568   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:33.372589   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:33.372653   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:33.409551   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:33.409579   74510 cri.go:89] found id: ""
	I0816 18:19:33.409590   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:33.409648   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.413727   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:33.413802   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:33.457246   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:33.457268   74510 cri.go:89] found id: ""
	I0816 18:19:33.457277   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:33.457337   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.461490   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:33.461556   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:33.497141   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:33.497169   74510 cri.go:89] found id: ""
	I0816 18:19:33.497180   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:33.497241   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.501353   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:33.501421   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:33.537797   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:33.537816   74510 cri.go:89] found id: ""
	I0816 18:19:33.537823   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:33.537877   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.541727   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:33.541784   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:33.575882   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:33.575905   74510 cri.go:89] found id: ""
	I0816 18:19:33.575913   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:33.575964   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.579592   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:33.579644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:33.614425   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.614447   74510 cri.go:89] found id: ""
	I0816 18:19:33.614455   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:33.614507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.618130   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:33.618178   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:33.652369   74510 cri.go:89] found id: ""
	I0816 18:19:33.652393   74510 logs.go:276] 0 containers: []
	W0816 18:19:33.652403   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:33.652410   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:33.652463   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:33.687276   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.687295   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:33.687301   74510 cri.go:89] found id: ""
	I0816 18:19:33.687309   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:33.687361   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.691100   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.695148   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:33.695179   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.110901   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.606195   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:34.110732   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.110764   75006 pod_ready.go:82] duration metric: took 10.011612904s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.110778   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116373   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.116392   75006 pod_ready.go:82] duration metric: took 5.607377ms for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116401   75006 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124005   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.124027   75006 pod_ready.go:82] duration metric: took 7.618878ms for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124039   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129603   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.129623   75006 pod_ready.go:82] duration metric: took 5.575452ms for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129633   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145449   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.145474   75006 pod_ready.go:82] duration metric: took 15.831669ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506455   75006 pod_ready.go:93] pod "kube-proxy-qsskg" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.506477   75006 pod_ready.go:82] duration metric: took 360.982998ms for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905345   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.905365   75006 pod_ready.go:82] duration metric: took 398.872303ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905373   75006 pod_ready.go:39] duration metric: took 10.812448791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:34.905386   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:34.905430   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:34.920554   75006 api_server.go:72] duration metric: took 11.071846456s to wait for apiserver process to appear ...
	I0816 18:19:34.920574   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:34.920589   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:19:34.927194   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:19:34.928420   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:34.928437   75006 api_server.go:131] duration metric: took 7.857168ms to wait for apiserver health ...
	I0816 18:19:34.928443   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:35.107220   75006 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:35.107248   75006 system_pods.go:61] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.107254   75006 system_pods.go:61] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.107258   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.107262   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.107267   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.107270   75006 system_pods.go:61] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.107274   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.107280   75006 system_pods.go:61] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.107288   75006 system_pods.go:61] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.107296   75006 system_pods.go:74] duration metric: took 178.847431ms to wait for pod list to return data ...
	I0816 18:19:35.107302   75006 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:35.303619   75006 default_sa.go:45] found service account: "default"
	I0816 18:19:35.303646   75006 default_sa.go:55] duration metric: took 196.337687ms for default service account to be created ...
	I0816 18:19:35.303655   75006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:35.508401   75006 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:35.508442   75006 system_pods.go:89] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.508452   75006 system_pods.go:89] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.508460   75006 system_pods.go:89] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.508466   75006 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.508471   75006 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.508477   75006 system_pods.go:89] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.508483   75006 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.508494   75006 system_pods.go:89] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.508504   75006 system_pods.go:89] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.508521   75006 system_pods.go:126] duration metric: took 204.859728ms to wait for k8s-apps to be running ...
	I0816 18:19:35.508544   75006 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:35.508605   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:35.523660   75006 system_svc.go:56] duration metric: took 15.109288ms WaitForService to wait for kubelet
	I0816 18:19:35.523687   75006 kubeadm.go:582] duration metric: took 11.674985717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:35.523704   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:35.704770   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:35.704797   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:35.704808   75006 node_conditions.go:105] duration metric: took 181.099433ms to run NodePressure ...
	I0816 18:19:35.704818   75006 start.go:241] waiting for startup goroutines ...
	I0816 18:19:35.704824   75006 start.go:246] waiting for cluster config update ...
	I0816 18:19:35.704834   75006 start.go:255] writing updated cluster config ...
	I0816 18:19:35.705096   75006 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:35.753637   75006 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:35.755747   75006 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-256678" cluster and "default" namespace by default
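Before declaring the cluster ready, the system_svc.go step above confirms the kubelet systemd unit is active via systemctl's exit code. A simplified sketch of that check; the helper name is illustrative, and the only import assumed is os/exec:

	// kubeletServiceActive reports whether the kubelet systemd unit is active;
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active,
	// which is the check behind the system_svc.go lines above.
	func kubeletServiceActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}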
	I0816 18:19:33.732856   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:33.732881   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.796167   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:33.796215   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.835842   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:33.835869   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:33.956412   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:33.956450   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:34.004102   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:34.004137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:34.050504   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:34.050548   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:34.087815   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:34.087850   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:34.124096   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:34.124127   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:34.193377   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:34.193410   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:34.206480   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:34.206505   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:34.240262   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:34.240305   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:34.591979   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:34.592014   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:37.142552   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:19:37.142580   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.142585   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.142590   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.142593   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.142596   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.142600   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.142605   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.142609   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.142616   74510 system_pods.go:74] duration metric: took 3.770043434s to wait for pod list to return data ...
	I0816 18:19:37.142625   74510 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:37.145135   74510 default_sa.go:45] found service account: "default"
	I0816 18:19:37.145161   74510 default_sa.go:55] duration metric: took 2.530779ms for default service account to be created ...
	I0816 18:19:37.145169   74510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:37.149397   74510 system_pods.go:86] 8 kube-system pods found
	I0816 18:19:37.149423   74510 system_pods.go:89] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.149431   74510 system_pods.go:89] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.149437   74510 system_pods.go:89] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.149443   74510 system_pods.go:89] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.149451   74510 system_pods.go:89] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.149458   74510 system_pods.go:89] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.149471   74510 system_pods.go:89] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.149480   74510 system_pods.go:89] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.149491   74510 system_pods.go:126] duration metric: took 4.31556ms to wait for k8s-apps to be running ...
	I0816 18:19:37.149502   74510 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:37.149564   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:37.166663   74510 system_svc.go:56] duration metric: took 17.15398ms WaitForService to wait for kubelet
	I0816 18:19:37.166692   74510 kubeadm.go:582] duration metric: took 4m25.517719342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:37.166711   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:37.170081   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:37.170102   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:37.170112   74510 node_conditions.go:105] duration metric: took 3.396116ms to run NodePressure ...
	I0816 18:19:37.170122   74510 start.go:241] waiting for startup goroutines ...
	I0816 18:19:37.170129   74510 start.go:246] waiting for cluster config update ...
	I0816 18:19:37.170138   74510 start.go:255] writing updated cluster config ...
	I0816 18:19:37.170406   74510 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:37.218383   74510 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:37.220397   74510 out.go:177] * Done! kubectl is now configured to use "embed-certs-777541" cluster and "default" namespace by default
	I0816 18:19:37.609143   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:37.609401   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:47.609941   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:47.610185   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:07.611108   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:07.611350   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613446   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:47.613708   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613742   75402 kubeadm.go:310] 
	I0816 18:20:47.613809   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:20:47.613902   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:20:47.613926   75402 kubeadm.go:310] 
	I0816 18:20:47.613976   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:20:47.614028   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:20:47.614160   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:20:47.614174   75402 kubeadm.go:310] 
	I0816 18:20:47.614323   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:20:47.614383   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:20:47.614432   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:20:47.614441   75402 kubeadm.go:310] 
	I0816 18:20:47.614601   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:20:47.614730   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:20:47.614751   75402 kubeadm.go:310] 
	I0816 18:20:47.614875   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:20:47.614982   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:20:47.615101   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:20:47.615217   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:20:47.615230   75402 kubeadm.go:310] 
	I0816 18:20:47.616865   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:20:47.616971   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:20:47.617028   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 18:20:47.617173   75402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 18:20:47.617226   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:20:48.158066   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:20:48.172568   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:20:48.182445   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:20:48.182468   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:20:48.182527   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:20:48.191779   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:20:48.191847   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:20:48.201531   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:20:48.210495   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:20:48.210568   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:20:48.219701   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.228170   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:20:48.228242   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.237366   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:20:48.246335   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:20:48.246393   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
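The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply missing, so every grep exits 2 and the rm removes nothing). A minimal Go sketch of that cleanup loop; the function name is illustrative, and the imports assumed are os and strings. In this run the endpoint argument would be the "https://control-plane.minikube.internal:8443" string shown in the grep commands above:

	// cleanStaleKubeconfigs drops kubeconfigs that do not point at the expected
	// control-plane endpoint, as the kubeadm.go:163 messages above describe.
	func cleanStaleKubeconfigs(endpoint string) {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // stale or missing: clear it before re-running kubeadm init
			}
		}
	}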
	I0816 18:20:48.255655   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:20:48.321873   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:20:48.321930   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:20:48.462199   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:20:48.462324   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:20:48.462448   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:20:48.646565   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:20:48.648485   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:20:48.648605   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:20:48.648748   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:20:48.648895   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:20:48.648994   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:20:48.649088   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:20:48.649185   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:20:48.649282   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:20:48.649368   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:20:48.649485   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:20:48.649595   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:20:48.649649   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:20:48.649753   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:20:48.864525   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:20:49.035729   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:20:49.086765   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:20:49.222612   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:20:49.239121   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:20:49.240158   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:20:49.240200   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:20:49.366027   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:20:49.367770   75402 out.go:235]   - Booting up control plane ...
	I0816 18:20:49.367907   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:20:49.373047   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:20:49.373886   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:20:49.374691   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:20:49.379220   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:21:29.381362   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:21:29.381473   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:29.381700   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:34.381889   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:34.382065   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:44.382765   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:44.382964   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:04.383485   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:04.383748   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382265   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:44.382558   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382572   75402 kubeadm.go:310] 
	I0816 18:22:44.382628   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:22:44.382715   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:22:44.382741   75402 kubeadm.go:310] 
	I0816 18:22:44.382789   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:22:44.382837   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:22:44.382986   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:22:44.382997   75402 kubeadm.go:310] 
	I0816 18:22:44.383149   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:22:44.383202   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:22:44.383246   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:22:44.383258   75402 kubeadm.go:310] 
	I0816 18:22:44.383421   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:22:44.383534   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:22:44.383549   75402 kubeadm.go:310] 
	I0816 18:22:44.383743   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:22:44.383877   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:22:44.383993   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:22:44.384092   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:22:44.384103   75402 kubeadm.go:310] 
	I0816 18:22:44.384783   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:22:44.384895   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:22:44.384986   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:22:44.385062   75402 kubeadm.go:394] duration metric: took 8m1.372176417s to StartCluster
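
kubeadm's advice above can be followed directly against the CRI-O socket to see whether any control-plane container was ever created. A minimal sketch of that flow, using the socket path and the CONTAINERID placeholder from the advice itself (newer crictl releases may prefer the endpoint written as unix:///var/run/crio/crio.sock):

	# list any Kubernetes containers CRI-O knows about, excluding pause sandboxes
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# once a failing container ID is known, pull its logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

In this run the equivalent per-component queries below (crictl ps -a --quiet --name=...) find no containers at all, consistent with a kubelet that never got far enough to create the static pods.
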
	I0816 18:22:44.385108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:22:44.385173   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:22:44.425862   75402 cri.go:89] found id: ""
	I0816 18:22:44.425892   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.425901   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:22:44.425909   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:22:44.425982   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:22:44.461988   75402 cri.go:89] found id: ""
	I0816 18:22:44.462019   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.462030   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:22:44.462038   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:22:44.462109   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:22:44.496063   75402 cri.go:89] found id: ""
	I0816 18:22:44.496095   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.496106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:22:44.496114   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:22:44.496175   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:22:44.529875   75402 cri.go:89] found id: ""
	I0816 18:22:44.529899   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.529906   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:22:44.529912   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:22:44.529958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:22:44.565745   75402 cri.go:89] found id: ""
	I0816 18:22:44.565781   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.565791   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:22:44.565798   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:22:44.565860   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:22:44.604122   75402 cri.go:89] found id: ""
	I0816 18:22:44.604149   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.604160   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:22:44.604168   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:22:44.604228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:22:44.636607   75402 cri.go:89] found id: ""
	I0816 18:22:44.636658   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.636669   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:22:44.636677   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:22:44.636736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:22:44.670942   75402 cri.go:89] found id: ""
	I0816 18:22:44.670973   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.670981   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:22:44.670989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:22:44.671001   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:22:44.722403   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:22:44.722433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:22:44.738587   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:22:44.738627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:22:44.854530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:22:44.854563   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:22:44.854579   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:22:44.957308   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:22:44.957342   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 18:22:44.997652   75402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:22:44.997714   75402 out.go:270] * 
	W0816 18:22:44.997804   75402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:44.997828   75402 out.go:270] * 
	W0816 18:22:44.998787   75402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:22:45.002189   75402 out.go:201] 
	W0816 18:22:45.003254   75402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:45.003310   75402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:22:45.003340   75402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:22:45.004826   75402 out.go:201] 
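
The suggestion above points at a kubelet cgroup-driver mismatch. A minimal sketch of how that could be checked before retrying, assuming CRI-O keeps its config under the default /etc/crio/ path and reusing the kubelet files written earlier in this log; the profile name is a placeholder, and the --extra-config flag is the one named in the suggestion:

	# confirm how the kubelet is failing
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# cgroup driver CRI-O is configured with (cgroup_manager) ...
	sudo grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# ... versus the driver the kubelet was started with
	sudo grep -i cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
	# retry with the kubelet pinned to the systemd driver, as the suggestion proposes
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
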
	
	
	==> CRI-O <==
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.683697288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832907683676187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f32d06d-e82f-4178-948b-fea19c04499f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.684213861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf0a4c97-b990-4611-bb22-ac666bd9401a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.684316529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf0a4c97-b990-4611-bb22-ac666bd9401a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.684531139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247,PodSandboxId:3531663ca9faff9fef1494473b5cbadc7e98280ed9dccf83433dc703f94980fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832355468099317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05cdb7c-d74e-4008-a0fc-5eb6df9595af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5,PodSandboxId:c608522a82bb040cb2825a2d38b57765d3decb28382156984a58fce7da6764d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832355004952162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qr4q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20f51f3-6786-496b-a6bc-7457462e46e9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92,PodSandboxId:3f614a7790466d04f2a017da09b86df06a8b93fda4ee1d32ee194a68dbc1e911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832354820511186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6zfgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99
157766-5089-4abe-a888-ec5992e5720a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b,PodSandboxId:b4cd5a0c33fdd4e96136732c9a6036cc774f873db96a7f66a8e34c1b2ce0e08e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723832354059037305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g6zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a027eb-99e3-4b48-b9f1-2fc80cad9d2e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,PodSandboxId:f5b40775bbeeeed4b4e4cc32fdff38c091fb85b541143717f9a042442473054a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832343401012402,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435ac1d94e75156d97949e377dffe47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,PodSandboxId:e13790f458072e3de7198ed06ad8cd54f77f123f44e61e024895882b3919445d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832343421929415,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20321a4e2f5424b7598e3868eada327,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,PodSandboxId:ed6e330ed921c76738379aa0ea0215f2a477cecf27ef625074f5a57ee820ec43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832343377345874,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,PodSandboxId:2c607d3875087a77c393243c3487231071192d67bb205bee3c0836e4dc171236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832343384257730,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b251e07bba58ee03e247d5688bb7dd6f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d,PodSandboxId:5ab70ad2d348cae307e652eb0360fb992cd2935792e2165c8d9d2c66e88eeac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832055989521948,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf0a4c97-b990-4611-bb22-ac666bd9401a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.719532789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eafabe8f-c79b-4757-a436-3ee043de1c70 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.719620951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eafabe8f-c79b-4757-a436-3ee043de1c70 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.720455967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21b08134-ba10-4b0f-8836-72f670fa19df name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.720884799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832907720861739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21b08134-ba10-4b0f-8836-72f670fa19df name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.721441538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b0ec8cc-8bd9-4ed4-bd2e-29b1d924f5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.721496020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b0ec8cc-8bd9-4ed4-bd2e-29b1d924f5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.721700295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247,PodSandboxId:3531663ca9faff9fef1494473b5cbadc7e98280ed9dccf83433dc703f94980fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832355468099317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05cdb7c-d74e-4008-a0fc-5eb6df9595af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5,PodSandboxId:c608522a82bb040cb2825a2d38b57765d3decb28382156984a58fce7da6764d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832355004952162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qr4q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20f51f3-6786-496b-a6bc-7457462e46e9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92,PodSandboxId:3f614a7790466d04f2a017da09b86df06a8b93fda4ee1d32ee194a68dbc1e911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832354820511186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6zfgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99
157766-5089-4abe-a888-ec5992e5720a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b,PodSandboxId:b4cd5a0c33fdd4e96136732c9a6036cc774f873db96a7f66a8e34c1b2ce0e08e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723832354059037305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g6zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a027eb-99e3-4b48-b9f1-2fc80cad9d2e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,PodSandboxId:f5b40775bbeeeed4b4e4cc32fdff38c091fb85b541143717f9a042442473054a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832343401012402,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435ac1d94e75156d97949e377dffe47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,PodSandboxId:e13790f458072e3de7198ed06ad8cd54f77f123f44e61e024895882b3919445d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832343421929415,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20321a4e2f5424b7598e3868eada327,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,PodSandboxId:ed6e330ed921c76738379aa0ea0215f2a477cecf27ef625074f5a57ee820ec43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832343377345874,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,PodSandboxId:2c607d3875087a77c393243c3487231071192d67bb205bee3c0836e4dc171236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832343384257730,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b251e07bba58ee03e247d5688bb7dd6f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d,PodSandboxId:5ab70ad2d348cae307e652eb0360fb992cd2935792e2165c8d9d2c66e88eeac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832055989521948,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b0ec8cc-8bd9-4ed4-bd2e-29b1d924f5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.755688159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=408ffa50-015a-4239-8465-0ae390886411 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.755773706Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=408ffa50-015a-4239-8465-0ae390886411 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.757079826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9167eabd-e38f-4aed-b588-d74bedabe4d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.757516417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832907757491659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9167eabd-e38f-4aed-b588-d74bedabe4d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.757970460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa9f8c50-166c-41d9-b3e2-d3d46cc05af3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.758034998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa9f8c50-166c-41d9-b3e2-d3d46cc05af3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.758301092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247,PodSandboxId:3531663ca9faff9fef1494473b5cbadc7e98280ed9dccf83433dc703f94980fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832355468099317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05cdb7c-d74e-4008-a0fc-5eb6df9595af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5,PodSandboxId:c608522a82bb040cb2825a2d38b57765d3decb28382156984a58fce7da6764d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832355004952162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qr4q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20f51f3-6786-496b-a6bc-7457462e46e9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92,PodSandboxId:3f614a7790466d04f2a017da09b86df06a8b93fda4ee1d32ee194a68dbc1e911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832354820511186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6zfgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99
157766-5089-4abe-a888-ec5992e5720a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b,PodSandboxId:b4cd5a0c33fdd4e96136732c9a6036cc774f873db96a7f66a8e34c1b2ce0e08e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723832354059037305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g6zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a027eb-99e3-4b48-b9f1-2fc80cad9d2e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,PodSandboxId:f5b40775bbeeeed4b4e4cc32fdff38c091fb85b541143717f9a042442473054a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832343401012402,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435ac1d94e75156d97949e377dffe47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,PodSandboxId:e13790f458072e3de7198ed06ad8cd54f77f123f44e61e024895882b3919445d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832343421929415,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20321a4e2f5424b7598e3868eada327,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,PodSandboxId:ed6e330ed921c76738379aa0ea0215f2a477cecf27ef625074f5a57ee820ec43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832343377345874,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,PodSandboxId:2c607d3875087a77c393243c3487231071192d67bb205bee3c0836e4dc171236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832343384257730,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b251e07bba58ee03e247d5688bb7dd6f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d,PodSandboxId:5ab70ad2d348cae307e652eb0360fb992cd2935792e2165c8d9d2c66e88eeac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832055989521948,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa9f8c50-166c-41d9-b3e2-d3d46cc05af3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.790865644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6de9fe8f-078f-4958-976f-74b78fef246c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.790949699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6de9fe8f-078f-4958-976f-74b78fef246c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.792404327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b70dc29b-4dc0-4382-89bb-1b8134ab0322 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.792785397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832907792764431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b70dc29b-4dc0-4382-89bb-1b8134ab0322 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.793382162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=403cf5b3-a3f4-4016-bc1d-a3dd225bbcd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.793434396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=403cf5b3-a3f4-4016-bc1d-a3dd225bbcd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:27 no-preload-864476 crio[741]: time="2024-08-16 18:28:27.793629013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247,PodSandboxId:3531663ca9faff9fef1494473b5cbadc7e98280ed9dccf83433dc703f94980fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832355468099317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05cdb7c-d74e-4008-a0fc-5eb6df9595af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5,PodSandboxId:c608522a82bb040cb2825a2d38b57765d3decb28382156984a58fce7da6764d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832355004952162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qr4q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20f51f3-6786-496b-a6bc-7457462e46e9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92,PodSandboxId:3f614a7790466d04f2a017da09b86df06a8b93fda4ee1d32ee194a68dbc1e911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832354820511186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6zfgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99
157766-5089-4abe-a888-ec5992e5720a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b,PodSandboxId:b4cd5a0c33fdd4e96136732c9a6036cc774f873db96a7f66a8e34c1b2ce0e08e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723832354059037305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g6zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a027eb-99e3-4b48-b9f1-2fc80cad9d2e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,PodSandboxId:f5b40775bbeeeed4b4e4cc32fdff38c091fb85b541143717f9a042442473054a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832343401012402,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435ac1d94e75156d97949e377dffe47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,PodSandboxId:e13790f458072e3de7198ed06ad8cd54f77f123f44e61e024895882b3919445d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832343421929415,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20321a4e2f5424b7598e3868eada327,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,PodSandboxId:ed6e330ed921c76738379aa0ea0215f2a477cecf27ef625074f5a57ee820ec43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832343377345874,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,PodSandboxId:2c607d3875087a77c393243c3487231071192d67bb205bee3c0836e4dc171236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832343384257730,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b251e07bba58ee03e247d5688bb7dd6f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d,PodSandboxId:5ab70ad2d348cae307e652eb0360fb992cd2935792e2165c8d9d2c66e88eeac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832055989521948,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=403cf5b3-a3f4-4016-bc1d-a3dd225bbcd2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	253d2d8e44fc5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3531663ca9faf       storage-provisioner
	c94f1c42210f3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   c608522a82bb0       coredns-6f6b679f8f-qr4q9
	af0515870115c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3f614a7790466       coredns-6f6b679f8f-6zfgr
	230fb46a1bbd9       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   b4cd5a0c33fdd       kube-proxy-6g6zx
	b854bb0edfc4f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   e13790f458072       kube-controller-manager-no-preload-864476
	a88473ef80e0f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f5b40775bbeee       etcd-no-preload-864476
	57f5608b266b0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   2c607d3875087       kube-scheduler-no-preload-864476
	ae3099e546ae8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   ed6e330ed921c       kube-apiserver-no-preload-864476
	fb6801af1233b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   5ab70ad2d348c       kube-apiserver-no-preload-864476
	
	
	==> coredns [af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-864476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-864476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=no-preload-864476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 18:19:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-864476
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:28:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:24:25 +0000   Fri, 16 Aug 2024 18:19:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:24:25 +0000   Fri, 16 Aug 2024 18:19:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:24:25 +0000   Fri, 16 Aug 2024 18:19:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:24:25 +0000   Fri, 16 Aug 2024 18:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.50
	  Hostname:    no-preload-864476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98901b82b8c8489f9453902580550602
	  System UUID:                98901b82-b8c8-489f-9453-902580550602
	  Boot ID:                    e954e701-4508-4b66-a634-9625ff35ac85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6zfgr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 coredns-6f6b679f8f-qr4q9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-no-preload-864476                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-864476             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-no-preload-864476    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-6g6zx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-no-preload-864476             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-r6cph              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m13s  kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node no-preload-864476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node no-preload-864476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node no-preload-864476 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s  node-controller  Node no-preload-864476 event: Registered Node no-preload-864476 in Controller
	
	
	==> dmesg <==
	[  +0.036150] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680657] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.834416] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.548155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.272897] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.060738] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062899] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.191981] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.147334] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +0.280754] systemd-fstab-generator[725]: Ignoring "noauto" option for root device
	[Aug16 18:14] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.055893] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.200234] systemd-fstab-generator[1439]: Ignoring "noauto" option for root device
	[  +4.651886] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.552124] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.322395] kauditd_printk_skb: 28 callbacks suppressed
	[Aug16 18:19] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.436175] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +4.603573] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.447310] systemd-fstab-generator[3405]: Ignoring "noauto" option for root device
	[  +5.417097] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.133430] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.665594] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa] <==
	{"level":"info","ts":"2024-08-16T18:19:03.831774Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T18:19:03.831798Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-08-16T18:19:03.832048Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-08-16T18:19:03.832244Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c0dcbd712fbd8799","initial-advertise-peer-urls":["https://192.168.50.50:2380"],"listen-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T18:19:03.832299Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T18:19:03.933356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T18:19:03.933431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T18:19:03.933448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgPreVoteResp from c0dcbd712fbd8799 at term 1"}
	{"level":"info","ts":"2024-08-16T18:19:03.933467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:03.933473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgVoteResp from c0dcbd712fbd8799 at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:03.933482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became leader at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:03.933489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0dcbd712fbd8799 elected leader c0dcbd712fbd8799 at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:03.936788Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c0dcbd712fbd8799","local-member-attributes":"{Name:no-preload-864476 ClientURLs:[https://192.168.50.50:2379]}","request-path":"/0/members/c0dcbd712fbd8799/attributes","cluster-id":"6b98348baa467fce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T18:19:03.936837Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:19:03.936909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:19:03.937253Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:03.938974Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:03.941085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T18:19:03.941406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:03.941522Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:03.942889Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:03.946229Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.50:2379"}
	{"level":"info","ts":"2024-08-16T18:19:03.946394Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6b98348baa467fce","local-member-id":"c0dcbd712fbd8799","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:03.946489Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:03.946546Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:28:28 up 14 min,  0 users,  load average: 0.22, 0.21, 0.15
	Linux no-preload-864476 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5] <==
	W0816 18:24:06.845767       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:24:06.845874       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 18:24:06.846917       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:24:06.846951       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:25:06.847909       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:25:06.848061       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:25:06.847910       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:25:06.848120       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:25:06.849448       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:25:06.849452       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:27:06.849888       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:27:06.850026       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 18:27:06.849911       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:27:06.850103       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 18:27:06.851058       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:27:06.851169       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d] <==
	W0816 18:18:56.169155       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.197654       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.203189       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.207608       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.263439       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.314705       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.342513       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.352543       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.375557       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.390060       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.411526       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.417410       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.418806       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.453219       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.454622       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.516419       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.563152       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.564424       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.628741       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.856057       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.867542       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.900446       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:57.214721       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:57.283741       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:59.835194       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e] <==
	E0816 18:23:12.859350       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:23:13.312500       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:23:42.866597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:23:43.322347       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:24:12.873109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:24:13.333752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:24:25.570384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-864476"
	E0816 18:24:42.879428       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:24:43.342836       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:25:08.567694       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="239.967µs"
	E0816 18:25:12.886500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:25:13.349711       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:25:22.563964       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.645µs"
	E0816 18:25:42.893029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:25:43.358574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:26:12.899677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:26:13.376038       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:26:42.907505       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:26:43.383450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:27:12.914519       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:27:13.393184       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:27:42.920984       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:27:43.402230       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:28:12.927876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:28:13.415156       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 18:19:14.593059       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 18:19:14.646064       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.50"]
	E0816 18:19:14.646187       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 18:19:14.792396       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 18:19:14.792451       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 18:19:14.792483       1 server_linux.go:169] "Using iptables Proxier"
	I0816 18:19:14.794992       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 18:19:14.795260       1 server.go:483] "Version info" version="v1.31.0"
	I0816 18:19:14.795310       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:19:14.797685       1 config.go:197] "Starting service config controller"
	I0816 18:19:14.797737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 18:19:14.797771       1 config.go:104] "Starting endpoint slice config controller"
	I0816 18:19:14.797786       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 18:19:14.798662       1 config.go:326] "Starting node config controller"
	I0816 18:19:14.798670       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 18:19:14.898402       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 18:19:14.898491       1 shared_informer.go:320] Caches are synced for service config
	I0816 18:19:14.898716       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f] <==
	W0816 18:19:05.878333       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 18:19:05.879858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 18:19:05.879967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:05.880076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:19:05.880243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:19:05.880444       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 18:19:05.878859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:05.880544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.781137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:06.781251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.839176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 18:19:06.840271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.865091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:19:06.865151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.895383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 18:19:06.895457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:07.071698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:07.071799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:07.096842       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:19:07.096891       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 18:19:09.359563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 18:27:12 no-preload-864476 kubelet[3412]: E0816 18:27:12.548352    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:27:18 no-preload-864476 kubelet[3412]: E0816 18:27:18.665708    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832838665209687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:18 no-preload-864476 kubelet[3412]: E0816 18:27:18.670201    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832838665209687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:27 no-preload-864476 kubelet[3412]: E0816 18:27:27.548848    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:27:28 no-preload-864476 kubelet[3412]: E0816 18:27:28.671646    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832848671380007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:28 no-preload-864476 kubelet[3412]: E0816 18:27:28.671946    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832848671380007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:38 no-preload-864476 kubelet[3412]: E0816 18:27:38.673678    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832858673366943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:38 no-preload-864476 kubelet[3412]: E0816 18:27:38.673994    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832858673366943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:41 no-preload-864476 kubelet[3412]: E0816 18:27:41.548258    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:27:48 no-preload-864476 kubelet[3412]: E0816 18:27:48.675343    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832868675057470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:48 no-preload-864476 kubelet[3412]: E0816 18:27:48.675382    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832868675057470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:56 no-preload-864476 kubelet[3412]: E0816 18:27:56.548561    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:27:58 no-preload-864476 kubelet[3412]: E0816 18:27:58.677195    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832878676886184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:58 no-preload-864476 kubelet[3412]: E0816 18:27:58.677234    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832878676886184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]: E0816 18:28:08.550635    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]: E0816 18:28:08.585539    3412 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]: E0816 18:28:08.679820    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832888678777555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:08 no-preload-864476 kubelet[3412]: E0816 18:28:08.679873    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832888678777555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:18 no-preload-864476 kubelet[3412]: E0816 18:28:18.681586    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832898681257018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:18 no-preload-864476 kubelet[3412]: E0816 18:28:18.681980    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832898681257018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:20 no-preload-864476 kubelet[3412]: E0816 18:28:20.550173    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	
	
	==> storage-provisioner [253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247] <==
	I0816 18:19:15.568890       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 18:19:15.587753       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 18:19:15.587807       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 18:19:15.604117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 18:19:15.604992       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-864476_1bb0b690-6376-46ef-9ce2-3ed8222c67dd!
	I0816 18:19:15.605403       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a87f0ca-7fd5-417d-81d9-efa74cb5b7ce", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-864476_1bb0b690-6376-46ef-9ce2-3ed8222c67dd became leader
	I0816 18:19:15.713733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-864476_1bb0b690-6376-46ef-9ce2-3ed8222c67dd!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864476 -n no-preload-864476
E0816 18:28:29.899506   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-864476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-r6cph
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-864476 describe pod metrics-server-6867b74b74-r6cph
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-864476 describe pod metrics-server-6867b74b74-r6cph: exit status 1 (59.613808ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-r6cph" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-864476 describe pod metrics-server-6867b74b74-r6cph: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.23s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 18:28:36.267764569 +0000 UTC m=+6016.586309582
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
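For reference, the readiness check that times out above can be re-run by hand against the same profile. A minimal sketch using kubectl, with the context name, namespace, label selector, and 9m timeout taken from the harness output above (this command is illustrative only and is not what the test binary itself executes):

	kubectl --context default-k8s-diff-port-256678 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s

If the dashboard deployment never created a pod at all, a plain listing with the same selector (kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard) distinguishes "no pods exist" from "pods exist but never became Ready".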
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-256678 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-256678 logs -n 25: (2.32029839s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo cat                      | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:10:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:10:53.101401   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101412   75402 out.go:358] Setting ErrFile to fd 2...
	I0816 18:10:53.101418   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101600   75402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:10:53.102131   75402 out.go:352] Setting JSON to false
	I0816 18:10:53.103018   75402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1723825102,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:10:53.103076   75402 start.go:139] virtualization: kvm guest
	I0816 18:10:53.105216   75402 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:10:53.106496   75402 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:10:53.106504   75402 notify.go:220] Checking for updates...
	I0816 18:10:53.109235   75402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:10:53.110572   75402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:10:53.111747   75402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:10:53.113164   75402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:10:53.114589   75402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:10:53.116284   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:10:53.116746   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.116806   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.132445   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0816 18:10:53.132886   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.133456   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.133494   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.133836   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.134015   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.135791   75402 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:10:53.136942   75402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:10:53.137229   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.137260   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.151853   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0816 18:10:53.152327   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.152881   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.152905   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.153159   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.153307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.188002   75402 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:10:53.189287   75402 start.go:297] selected driver: kvm2
	I0816 18:10:53.189309   75402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.189432   75402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:10:53.190098   75402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.190187   75402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:10:53.205024   75402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:10:53.205386   75402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:10:53.205417   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:10:53.205425   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:10:53.205458   75402 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.205557   75402 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.207241   75402 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:10:53.208254   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:10:53.208286   75402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:10:53.208298   75402 cache.go:56] Caching tarball of preloaded images
	I0816 18:10:53.208386   75402 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:10:53.208400   75402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:10:53.208510   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:10:53.208736   75402 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:10:54.604889   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:10:57.676891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:03.756940   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:06.828911   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:12.908885   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:15.980925   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:22.060891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:25.132961   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:31.212919   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:34.284876   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:40.365032   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:43.436910   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:49.516914   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:52.588969   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:58.668915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:01.740965   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:07.820898   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:10.892922   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:16.972913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:20.044913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:26.124921   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:29.196968   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:35.276952   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:38.348971   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:44.428932   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:47.500897   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:53.580923   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:56.652927   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:02.732992   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:05.804929   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:11.884953   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:14.956943   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:21.036963   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:24.108915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:30.188851   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:33.260936   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:36.264963   74828 start.go:364] duration metric: took 4m2.37855556s to acquireMachinesLock for "no-preload-864476"
	I0816 18:13:36.265020   74828 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:36.265027   74828 fix.go:54] fixHost starting: 
	I0816 18:13:36.265379   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:36.265409   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:36.280707   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0816 18:13:36.281167   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:36.281747   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:13:36.281778   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:36.282122   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:36.282330   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:36.282457   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:13:36.284064   74828 fix.go:112] recreateIfNeeded on no-preload-864476: state=Stopped err=<nil>
	I0816 18:13:36.284084   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	W0816 18:13:36.284217   74828 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:36.286749   74828 out.go:177] * Restarting existing kvm2 VM for "no-preload-864476" ...
	I0816 18:13:36.262619   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:36.262654   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.262944   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:13:36.262967   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.263222   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:13:36.264803   74510 machine.go:96] duration metric: took 4m37.429582668s to provisionDockerMachine
	I0816 18:13:36.264858   74510 fix.go:56] duration metric: took 4m37.449862851s for fixHost
	I0816 18:13:36.264867   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 4m37.449881856s
	W0816 18:13:36.264895   74510 start.go:714] error starting host: provision: host is not running
	W0816 18:13:36.264994   74510 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 18:13:36.265005   74510 start.go:729] Will try again in 5 seconds ...
	I0816 18:13:36.288329   74828 main.go:141] libmachine: (no-preload-864476) Calling .Start
	I0816 18:13:36.288484   74828 main.go:141] libmachine: (no-preload-864476) Ensuring networks are active...
	I0816 18:13:36.289285   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network default is active
	I0816 18:13:36.289912   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network mk-no-preload-864476 is active
	I0816 18:13:36.290318   74828 main.go:141] libmachine: (no-preload-864476) Getting domain xml...
	I0816 18:13:36.291176   74828 main.go:141] libmachine: (no-preload-864476) Creating domain...
	I0816 18:13:37.504191   74828 main.go:141] libmachine: (no-preload-864476) Waiting to get IP...
	I0816 18:13:37.505110   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.505575   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.505621   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.505543   75973 retry.go:31] will retry after 308.411866ms: waiting for machine to come up
	I0816 18:13:37.816219   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.816877   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.816931   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.816852   75973 retry.go:31] will retry after 321.445064ms: waiting for machine to come up
	I0816 18:13:38.140594   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.141059   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.141082   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.141018   75973 retry.go:31] will retry after 337.935433ms: waiting for machine to come up
	I0816 18:13:38.480699   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.481110   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.481135   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.481033   75973 retry.go:31] will retry after 449.775503ms: waiting for machine to come up
	I0816 18:13:41.266589   74510 start.go:360] acquireMachinesLock for embed-certs-777541: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:13:38.932812   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.933232   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.933259   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.933171   75973 retry.go:31] will retry after 482.676832ms: waiting for machine to come up
	I0816 18:13:39.417939   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:39.418323   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:39.418350   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:39.418276   75973 retry.go:31] will retry after 740.37516ms: waiting for machine to come up
	I0816 18:13:40.160491   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:40.160917   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:40.160942   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:40.160867   75973 retry.go:31] will retry after 1.10464436s: waiting for machine to come up
	I0816 18:13:41.267213   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:41.267654   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:41.267680   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:41.267613   75973 retry.go:31] will retry after 1.395131164s: waiting for machine to come up
	I0816 18:13:42.664731   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:42.665229   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:42.665252   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:42.665181   75973 retry.go:31] will retry after 1.560403289s: waiting for machine to come up
	I0816 18:13:44.226847   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:44.227375   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:44.227404   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:44.227342   75973 retry.go:31] will retry after 1.647944685s: waiting for machine to come up
	I0816 18:13:45.876965   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:45.877411   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:45.877440   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:45.877366   75973 retry.go:31] will retry after 1.971325886s: waiting for machine to come up
	I0816 18:13:47.849950   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:47.850457   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:47.850490   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:47.850383   75973 retry.go:31] will retry after 2.95642392s: waiting for machine to come up
	I0816 18:13:50.810560   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:50.811013   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:50.811045   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:50.810930   75973 retry.go:31] will retry after 4.510008193s: waiting for machine to come up
	I0816 18:13:56.529339   75006 start.go:364] duration metric: took 4m6.515818295s to acquireMachinesLock for "default-k8s-diff-port-256678"
	I0816 18:13:56.529444   75006 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:56.529459   75006 fix.go:54] fixHost starting: 
	I0816 18:13:56.529851   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:56.529890   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:56.547077   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0816 18:13:56.547585   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:56.548068   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:13:56.548091   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:56.548421   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:56.548610   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:13:56.548766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:13:56.550373   75006 fix.go:112] recreateIfNeeded on default-k8s-diff-port-256678: state=Stopped err=<nil>
	I0816 18:13:56.550414   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	W0816 18:13:56.550604   75006 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:56.552781   75006 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-256678" ...
	I0816 18:13:55.326062   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.326558   74828 main.go:141] libmachine: (no-preload-864476) Found IP for machine: 192.168.50.50
	I0816 18:13:55.326576   74828 main.go:141] libmachine: (no-preload-864476) Reserving static IP address...
	I0816 18:13:55.326593   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has current primary IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.327109   74828 main.go:141] libmachine: (no-preload-864476) Reserved static IP address: 192.168.50.50
	I0816 18:13:55.327142   74828 main.go:141] libmachine: (no-preload-864476) Waiting for SSH to be available...
	I0816 18:13:55.327167   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.327191   74828 main.go:141] libmachine: (no-preload-864476) DBG | skip adding static IP to network mk-no-preload-864476 - found existing host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"}
	I0816 18:13:55.327205   74828 main.go:141] libmachine: (no-preload-864476) DBG | Getting to WaitForSSH function...
	I0816 18:13:55.329001   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329350   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.329378   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329534   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH client type: external
	I0816 18:13:55.329574   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa (-rw-------)
	I0816 18:13:55.329604   74828 main.go:141] libmachine: (no-preload-864476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:13:55.329622   74828 main.go:141] libmachine: (no-preload-864476) DBG | About to run SSH command:
	I0816 18:13:55.329636   74828 main.go:141] libmachine: (no-preload-864476) DBG | exit 0
	I0816 18:13:55.452553   74828 main.go:141] libmachine: (no-preload-864476) DBG | SSH cmd err, output: <nil>: 
	I0816 18:13:55.452964   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetConfigRaw
	I0816 18:13:55.453557   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.455951   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456334   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.456370   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456564   74828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/config.json ...
	I0816 18:13:55.456782   74828 machine.go:93] provisionDockerMachine start ...
	I0816 18:13:55.456801   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:55.456983   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.459149   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459547   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.459570   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459730   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.459918   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460068   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460207   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.460418   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.460603   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.460637   74828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:13:55.564875   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:13:55.564903   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565203   74828 buildroot.go:166] provisioning hostname "no-preload-864476"
	I0816 18:13:55.565229   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565455   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.568114   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568578   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.568612   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568777   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.568912   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569023   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569200   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.569448   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.569649   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.569667   74828 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-864476 && echo "no-preload-864476" | sudo tee /etc/hostname
	I0816 18:13:55.686349   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-864476
	
	I0816 18:13:55.686378   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.689171   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689572   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.689608   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689792   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.690008   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690183   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690418   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.690623   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.690782   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.690798   74828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864476/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:13:55.800352   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
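
The shell snippet run over SSH above keeps the guest's 127.0.1.1 entry in sync with the freshly set hostname. A rough pure-Go equivalent of that rewrite, operating on the file contents as a string; ensureHostnameEntry is a made-up name for illustration, not a minikube function:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostnameEntry mirrors the grep/sed logic above: if no line already ends
// with the hostname, either rewrite an existing 127.0.1.1 line or append one.
func ensureHostnameEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostnameEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "no-preload-864476"))
}
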
	I0816 18:13:55.800386   74828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:13:55.800436   74828 buildroot.go:174] setting up certificates
	I0816 18:13:55.800452   74828 provision.go:84] configureAuth start
	I0816 18:13:55.800470   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.800793   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.803388   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.803786   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.803822   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.804025   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.806567   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.806977   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.807003   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.807129   74828 provision.go:143] copyHostCerts
	I0816 18:13:55.807178   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:13:55.807198   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:13:55.807286   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:13:55.807401   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:13:55.807412   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:13:55.807439   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:13:55.807554   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:13:55.807565   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:13:55.807588   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:13:55.807648   74828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.no-preload-864476 san=[127.0.0.1 192.168.50.50 localhost minikube no-preload-864476]
	I0816 18:13:55.881474   74828 provision.go:177] copyRemoteCerts
	I0816 18:13:55.881529   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:13:55.881558   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.884424   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.884952   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.884983   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.885138   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.885335   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.885486   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.885669   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:55.966915   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:13:55.989812   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:13:56.011744   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:13:56.032745   74828 provision.go:87] duration metric: took 232.276991ms to configureAuth
	I0816 18:13:56.032778   74828 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:13:56.033001   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:13:56.033096   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.035919   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036283   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.036311   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036499   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.036713   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036975   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.037100   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.037275   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.037294   74828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:13:56.296112   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:13:56.296140   74828 machine.go:96] duration metric: took 839.343895ms to provisionDockerMachine
	I0816 18:13:56.296152   74828 start.go:293] postStartSetup for "no-preload-864476" (driver="kvm2")
	I0816 18:13:56.296162   74828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:13:56.296177   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.296537   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:13:56.296570   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.299838   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300364   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.300396   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300603   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.300833   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.300985   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.301187   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.383095   74828 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:13:56.387172   74828 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:13:56.387200   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:13:56.387286   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:13:56.387392   74828 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:13:56.387550   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:13:56.396072   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:13:56.419470   74828 start.go:296] duration metric: took 123.306644ms for postStartSetup
	I0816 18:13:56.419509   74828 fix.go:56] duration metric: took 20.154482872s for fixHost
	I0816 18:13:56.419529   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.422047   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422454   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.422503   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422573   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.422764   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.422963   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.423150   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.423388   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.423597   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.423610   74828 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:13:56.529164   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832036.506687395
	
	I0816 18:13:56.529190   74828 fix.go:216] guest clock: 1723832036.506687395
	I0816 18:13:56.529200   74828 fix.go:229] Guest: 2024-08-16 18:13:56.506687395 +0000 UTC Remote: 2024-08-16 18:13:56.419513163 +0000 UTC m=+262.671840210 (delta=87.174232ms)
	I0816 18:13:56.529229   74828 fix.go:200] guest clock delta is within tolerance: 87.174232ms
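
fix.go above compares the guest's `date +%s.%N` output against the host clock and only considers a resync when the delta leaves tolerance. A small hedged sketch of that comparison; the 2-second tolerance and the clockDelta helper are assumptions for illustration, not minikube's values:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses a `date +%s.%N` style timestamp from the guest and returns
// how far it is from the given host time.
func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold
	host := time.Unix(0, int64(1723832036.419513163*float64(time.Second)))
	delta, err := clockDelta("1723832036.506687395", host)
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
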
	I0816 18:13:56.529246   74828 start.go:83] releasing machines lock for "no-preload-864476", held for 20.264231324s
	I0816 18:13:56.529276   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.529645   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:56.532279   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532599   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.532660   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532824   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533348   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533522   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533604   74828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:13:56.533663   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.533759   74828 ssh_runner.go:195] Run: cat /version.json
	I0816 18:13:56.533786   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.536427   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536711   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536822   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.536845   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536996   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537071   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.537105   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.537191   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537334   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537430   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537497   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.537582   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537728   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537964   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.654319   74828 ssh_runner.go:195] Run: systemctl --version
	I0816 18:13:56.660640   74828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:13:56.806359   74828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:13:56.812415   74828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:13:56.812489   74828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:13:56.828095   74828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:13:56.828122   74828 start.go:495] detecting cgroup driver to use...
	I0816 18:13:56.828186   74828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:13:56.843041   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:13:56.856322   74828 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:13:56.856386   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:13:56.869899   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:13:56.884609   74828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:13:56.990986   74828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:13:57.134218   74828 docker.go:233] disabling docker service ...
	I0816 18:13:57.134283   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:13:57.156415   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:13:57.172969   74828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:13:57.328279   74828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:13:57.448217   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:13:57.461630   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:13:57.478199   74828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:13:57.478271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.487845   74828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:13:57.487918   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.497895   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.509260   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.519090   74828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:13:57.529351   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.539816   74828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.559271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.573027   74828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:13:57.583410   74828 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:13:57.583490   74828 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:13:57.598762   74828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:13:57.609589   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:13:57.727016   74828 ssh_runner.go:195] Run: sudo systemctl restart crio
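
The run above is minikube's ssh_runner executing a fixed sequence of sed/systemctl commands inside the guest to point CRI-O at the right pause image, cgroup driver and sysctls before restarting it. A minimal sketch of running such an ordered command list over SSH with golang.org/x/crypto/ssh, stopping at the first failure; the key path is a placeholder and the command list is only an illustrative subset of the logged steps:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runAll executes each command in its own session and stops on the first error.
func runAll(client *ssh.Client, cmds []string) error {
	for _, cmd := range cmds {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		if err != nil {
			return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
		}
	}
	return nil
}

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.50.50:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Illustrative subset of the CRI-O setup steps seen in the log.
	err = runAll(client, []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("crio reconfigured and restarted")
}
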
	I0816 18:13:57.876815   74828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:13:57.876876   74828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:13:57.882172   74828 start.go:563] Will wait 60s for crictl version
	I0816 18:13:57.882241   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:57.885706   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:13:57.926981   74828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:13:57.927070   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.957802   74828 ssh_runner.go:195] Run: crio --version
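
start.go above waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for a version. A small sketch of that kind of deadline-bounded existence check; the 500ms poll interval is an assumption, only the path and 60s budget come from the log:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout expires.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
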
	I0816 18:13:57.984920   74828 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:13:57.986450   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:57.989584   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990205   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:57.990257   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990552   74828 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 18:13:57.994584   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:13:58.007996   74828 kubeadm.go:883] updating cluster {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:13:58.008137   74828 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:13:58.008184   74828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:13:58.041643   74828 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:13:58.041672   74828 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:13:58.041751   74828 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.041778   74828 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.041794   74828 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.041741   74828 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.041779   74828 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.041899   74828 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 18:13:58.041918   74828 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.041798   74828 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.043388   74828 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.043394   74828 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.289223   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.299125   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.308703   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 18:13:58.339031   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.351467   74828 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 18:13:58.351514   74828 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.351572   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.358019   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.359198   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.385487   74828 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 18:13:58.385529   74828 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.385571   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.392417   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.506834   74828 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 18:13:58.506886   74828 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.506896   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.506924   74828 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 18:13:58.506963   74828 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.507003   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.506928   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507072   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.507004   74828 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 18:13:58.507099   74828 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.507124   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507160   74828 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 18:13:58.507181   74828 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.507228   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.562410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.562469   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.562481   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.562554   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.562590   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.562628   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.686069   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.690288   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.690352   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.692851   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.692911   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.693027   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.777263   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:56.554238   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Start
	I0816 18:13:56.554426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring networks are active...
	I0816 18:13:56.555221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network default is active
	I0816 18:13:56.555599   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network mk-default-k8s-diff-port-256678 is active
	I0816 18:13:56.556004   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Getting domain xml...
	I0816 18:13:56.556809   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Creating domain...
	I0816 18:13:57.825641   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting to get IP...
	I0816 18:13:57.826681   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827158   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:57.827129   76107 retry.go:31] will retry after 267.923612ms: waiting for machine to come up
	I0816 18:13:58.096794   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.097158   76107 retry.go:31] will retry after 286.726817ms: waiting for machine to come up
	I0816 18:13:58.386213   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386782   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.386704   76107 retry.go:31] will retry after 386.697374ms: waiting for machine to come up
	I0816 18:13:58.775483   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.775989   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.776014   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.775949   76107 retry.go:31] will retry after 554.398617ms: waiting for machine to come up
	I0816 18:13:59.331517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332002   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.331943   76107 retry.go:31] will retry after 589.24333ms: waiting for machine to come up
	I0816 18:13:58.823309   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 18:13:58.823318   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 18:13:58.823410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.823434   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.823437   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:13:58.823549   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.836312   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.894363   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 18:13:58.894428   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 18:13:58.894447   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:13:58.933183   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 18:13:58.933290   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:13:58.934389   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 18:13:58.934456   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 18:13:58.934491   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 18:13:58.934550   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:13:58.934569   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:13:58.934682   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792156   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.897633034s)
	I0816 18:14:00.792196   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 18:14:00.792224   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.89763588s)
	I0816 18:14:00.792257   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 18:14:00.792230   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792281   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.858968807s)
	I0816 18:14:00.792300   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 18:14:00.792317   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792355   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.85778817s)
	I0816 18:14:00.792370   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 18:14:00.792415   74828 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.857704749s)
	I0816 18:14:00.792422   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.857843473s)
	I0816 18:14:00.792436   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 18:14:00.792457   74828 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 18:14:00.792491   74828 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792528   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:14:00.797103   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:03.171070   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.378727123s)
	I0816 18:14:03.171118   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 18:14:03.171149   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.374004458s)
	I0816 18:14:03.171155   74828 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171274   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171225   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:59.922834   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923439   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.923368   76107 retry.go:31] will retry after 779.656786ms: waiting for machine to come up
	I0816 18:14:00.704929   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705395   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705417   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:00.705344   76107 retry.go:31] will retry after 790.87115ms: waiting for machine to come up
	I0816 18:14:01.497557   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.497999   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.498052   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:01.497981   76107 retry.go:31] will retry after 919.825072ms: waiting for machine to come up
	I0816 18:14:02.419821   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420280   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420312   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:02.420227   76107 retry.go:31] will retry after 1.304504009s: waiting for machine to come up
	I0816 18:14:03.725928   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726378   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726400   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:03.726344   76107 retry.go:31] will retry after 2.105251359s: waiting for machine to come up
	I0816 18:14:06.879864   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.708558161s)
	I0816 18:14:06.879904   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 18:14:06.879905   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.708563338s)
	I0816 18:14:06.879935   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:06.879981   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:06.879991   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:08.769077   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.889063218s)
	I0816 18:14:08.769114   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 18:14:08.769145   74828 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769231   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769146   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.889146748s)
	I0816 18:14:08.769343   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 18:14:08.769431   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:05.833605   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834078   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834109   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:05.834025   76107 retry.go:31] will retry after 2.042421539s: waiting for machine to come up
	I0816 18:14:07.878000   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878510   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878541   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:07.878432   76107 retry.go:31] will retry after 2.777402825s: waiting for machine to come up
	I0816 18:14:10.627286   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.858028746s)
	I0816 18:14:10.627331   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 18:14:10.627346   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.857891086s)
	I0816 18:14:10.627358   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:10.627378   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 18:14:10.627402   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:11.977277   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.349851948s)
	I0816 18:14:11.977314   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 18:14:11.977339   74828 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:11.977389   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:12.630939   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 18:14:12.630999   74828 cache_images.go:123] Successfully loaded all cached images
	I0816 18:14:12.631004   74828 cache_images.go:92] duration metric: took 14.589319022s to LoadCachedImages
	I0816 18:14:12.631016   74828 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.31.0 crio true true} ...
	I0816 18:14:12.631132   74828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-864476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:12.631207   74828 ssh_runner.go:195] Run: crio config
	I0816 18:14:12.683072   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:12.683094   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:12.683107   74828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:12.683129   74828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864476 NodeName:no-preload-864476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:12.683276   74828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864476"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:12.683345   74828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:12.693879   74828 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:12.693941   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:12.702601   74828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 18:14:12.718235   74828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:12.733455   74828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 18:14:12.748878   74828 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:12.752276   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:12.763390   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:12.872450   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:12.888531   74828 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476 for IP: 192.168.50.50
	I0816 18:14:12.888569   74828 certs.go:194] generating shared ca certs ...
	I0816 18:14:12.888589   74828 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:12.888783   74828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:12.888845   74828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:12.888860   74828 certs.go:256] generating profile certs ...
	I0816 18:14:12.888971   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/client.key
	I0816 18:14:12.889070   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key.30cf6dcb
	I0816 18:14:12.889136   74828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key
	I0816 18:14:12.889298   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:12.889339   74828 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:12.889351   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:12.889391   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:12.889421   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:12.889452   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:12.889507   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:12.890441   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:12.919571   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:12.947375   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:12.975197   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:13.007308   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:14:13.056151   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:13.080317   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:13.102231   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:13.124045   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:13.145312   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:13.166806   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:13.188173   74828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:13.203594   74828 ssh_runner.go:195] Run: openssl version
	I0816 18:14:13.209148   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:13.220266   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224569   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224635   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.230141   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:13.241362   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:13.252437   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256658   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256712   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.262006   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:13.273168   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:13.284518   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288566   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288611   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.293944   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:13.305148   74828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:13.309460   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:13.315123   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:13.320854   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:13.326676   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:13.332183   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:13.337794   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:13.343369   74828 kubeadm.go:392] StartCluster: {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:13.343470   74828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:13.343527   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.384490   74828 cri.go:89] found id: ""
	I0816 18:14:13.384567   74828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:13.395094   74828 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:13.395116   74828 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:13.395183   74828 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:13.406605   74828 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:13.407898   74828 kubeconfig.go:125] found "no-preload-864476" server: "https://192.168.50.50:8443"
	I0816 18:14:13.410808   74828 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:13.420516   74828 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.50
	I0816 18:14:13.420541   74828 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:13.420554   74828 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:13.420589   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.459174   74828 cri.go:89] found id: ""
	I0816 18:14:13.459242   74828 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:13.475598   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:13.484685   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:13.484707   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:13.484756   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:13.493092   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:13.493147   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:13.501649   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:13.509987   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:13.510028   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:13.518500   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.526689   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:13.526737   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.535606   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:13.545130   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:13.545185   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:13.553947   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:13.562763   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:13.663383   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:10.657652   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658105   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:10.657999   76107 retry.go:31] will retry after 3.856225979s: waiting for machine to come up
	I0816 18:14:14.518358   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.518875   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Found IP for machine: 192.168.72.144
	I0816 18:14:14.518896   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserving static IP address...
	I0816 18:14:14.518915   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has current primary IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.519296   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserved static IP address: 192.168.72.144
	I0816 18:14:14.519334   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.519346   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for SSH to be available...
	I0816 18:14:14.519377   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | skip adding static IP to network mk-default-k8s-diff-port-256678 - found existing host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"}
	I0816 18:14:14.519391   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Getting to WaitForSSH function...
	I0816 18:14:14.521566   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.521926   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.521969   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.522133   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH client type: external
	I0816 18:14:14.522160   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa (-rw-------)
	I0816 18:14:14.522202   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:14.522221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | About to run SSH command:
	I0816 18:14:14.522235   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | exit 0
	I0816 18:14:14.648603   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:14.649005   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetConfigRaw
	I0816 18:14:14.649616   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:14.652340   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.652767   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.652796   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.653116   75006 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/config.json ...
	I0816 18:14:14.653337   75006 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:14.653361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:14.653598   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.656062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656412   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.656442   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656565   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.656757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.656895   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.657015   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.657128   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.657312   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.657321   75006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:14.768721   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:14.768749   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.768990   75006 buildroot.go:166] provisioning hostname "default-k8s-diff-port-256678"
	I0816 18:14:14.769021   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.769246   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.772310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772675   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.772704   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772922   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.773084   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773242   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.773564   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.773764   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.773783   75006 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-256678 && echo "default-k8s-diff-port-256678" | sudo tee /etc/hostname
	I0816 18:14:14.894016   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-256678
	
	I0816 18:14:14.894047   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.896797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897150   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.897184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897424   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.897613   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897933   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.898124   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.898286   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.898303   75006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-256678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-256678/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-256678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:15.814480   75402 start.go:364] duration metric: took 3m22.605706427s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:14:15.814546   75402 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:15.814554   75402 fix.go:54] fixHost starting: 
	I0816 18:14:15.815001   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:15.815062   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:15.834710   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0816 18:14:15.835124   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:15.835653   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:14:15.835676   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:15.836005   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:15.836258   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:15.836392   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:14:15.838010   75402 fix.go:112] recreateIfNeeded on old-k8s-version-783465: state=Stopped err=<nil>
	I0816 18:14:15.838043   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	W0816 18:14:15.838200   75402 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:15.840214   75402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	I0816 18:14:15.016150   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:15.016176   75006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:15.016200   75006 buildroot.go:174] setting up certificates
	I0816 18:14:15.016213   75006 provision.go:84] configureAuth start
	I0816 18:14:15.016231   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:15.016518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.019132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019687   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.019725   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019907   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.022758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023192   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.023233   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023408   75006 provision.go:143] copyHostCerts
	I0816 18:14:15.023468   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:15.023489   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:15.023552   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:15.023649   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:15.023659   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:15.023681   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:15.023733   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:15.023740   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:15.023756   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:15.023802   75006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-256678 san=[127.0.0.1 192.168.72.144 default-k8s-diff-port-256678 localhost minikube]
	I0816 18:14:15.142549   75006 provision.go:177] copyRemoteCerts
	I0816 18:14:15.142601   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:15.142625   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.145515   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.145867   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.145903   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.146029   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.146250   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.146436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.146604   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.230785   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:15.258450   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 18:14:15.286008   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:15.308690   75006 provision.go:87] duration metric: took 292.45797ms to configureAuth
	I0816 18:14:15.308725   75006 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:15.308927   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:15.308996   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.311959   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.312332   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312492   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.312713   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.312890   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.313028   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.313184   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.313369   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.313387   75006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:15.574487   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:15.574517   75006 machine.go:96] duration metric: took 921.166622ms to provisionDockerMachine
	I0816 18:14:15.574529   75006 start.go:293] postStartSetup for "default-k8s-diff-port-256678" (driver="kvm2")
	I0816 18:14:15.574538   75006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:15.574552   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.574835   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:15.574854   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.577944   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578266   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.578295   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578469   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.578651   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.578800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.578912   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.664404   75006 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:15.668362   75006 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:15.668389   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:15.668459   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:15.668562   75006 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:15.668705   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:15.678830   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:15.702087   75006 start.go:296] duration metric: took 127.545675ms for postStartSetup
	I0816 18:14:15.702129   75006 fix.go:56] duration metric: took 19.172678011s for fixHost
	I0816 18:14:15.702152   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.704680   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705117   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.705154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705288   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.705479   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705643   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.705922   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.706084   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.706095   75006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:15.814313   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832055.788948458
	
	I0816 18:14:15.814337   75006 fix.go:216] guest clock: 1723832055.788948458
	I0816 18:14:15.814348   75006 fix.go:229] Guest: 2024-08-16 18:14:15.788948458 +0000 UTC Remote: 2024-08-16 18:14:15.702133997 +0000 UTC m=+265.826862410 (delta=86.814461ms)
	I0816 18:14:15.814372   75006 fix.go:200] guest clock delta is within tolerance: 86.814461ms
	I0816 18:14:15.814382   75006 start.go:83] releasing machines lock for "default-k8s-diff-port-256678", held for 19.284958633s
	I0816 18:14:15.814416   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.814723   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.817995   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.818467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818620   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819299   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819616   75006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:15.819656   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.819840   75006 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:15.819869   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.822797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823189   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823521   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823659   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.823804   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823811   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.823828   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823965   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824064   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.824177   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.824234   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.824368   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824486   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.948709   75006 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:15.956239   75006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:16.103538   75006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:16.109299   75006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:16.109385   75006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:16.125056   75006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:16.125092   75006 start.go:495] detecting cgroup driver to use...
	I0816 18:14:16.125188   75006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:16.141741   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:16.158917   75006 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:16.158993   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:16.173256   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:16.187026   75006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:16.332452   75006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:16.503181   75006 docker.go:233] disabling docker service ...
	I0816 18:14:16.503254   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:16.517961   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:16.535991   75006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:16.667874   75006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:16.799300   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:16.813852   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:16.832891   75006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:16.832953   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.845621   75006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:16.845716   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.856045   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.866117   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.877586   75006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:16.887643   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.897164   75006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.915247   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.924887   75006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:16.933645   75006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:16.933709   75006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:16.946920   75006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:16.955928   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:17.090148   75006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:17.241434   75006 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:17.241531   75006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:17.246730   75006 start.go:563] Will wait 60s for crictl version
	I0816 18:14:17.246796   75006 ssh_runner.go:195] Run: which crictl
	I0816 18:14:17.250397   75006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:17.289194   75006 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:17.289295   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.324401   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.361220   75006 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
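For readers following the CRI-O preparation above: the sequence of ssh_runner commands boils down to the sketch below. All paths and values are copied from the log lines; this is a condensed illustration, not a canonical minikube script.

	# Condensed sketch of the CRI-O runtime preparation logged above (values taken from the log).
	echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter                      # the sysctl probe above fails until this module is loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version                             # should report RuntimeName cri-o, RuntimeVersion 1.29.1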
	I0816 18:14:15.841411   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .Start
	I0816 18:14:15.841576   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:14:15.842263   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:14:15.842609   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:14:15.843023   75402 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:14:15.844141   75402 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:14:17.215163   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:14:17.216445   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.216933   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.217029   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.216922   76298 retry.go:31] will retry after 286.243503ms: waiting for machine to come up
	I0816 18:14:17.504645   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.505240   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.505262   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.505175   76298 retry.go:31] will retry after 275.715235ms: waiting for machine to come up
	I0816 18:14:17.782804   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.783365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.783392   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.783292   76298 retry.go:31] will retry after 343.088129ms: waiting for machine to come up
	I0816 18:14:14.936549   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273126441s)
	I0816 18:14:14.936584   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.139778   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.201814   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.270552   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:15.270667   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:15.771379   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.271296   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.335242   74828 api_server.go:72] duration metric: took 1.064710561s to wait for apiserver process to appear ...
	I0816 18:14:16.335265   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:16.335282   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:16.335727   74828 api_server.go:269] stopped: https://192.168.50.50:8443/healthz: Get "https://192.168.50.50:8443/healthz": dial tcp 192.168.50.50:8443: connect: connection refused
	I0816 18:14:16.835361   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
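The 74828 process above is re-initialising an existing control plane phase by phase rather than running a full kubeadm init; a minimal sketch of that sequence, with the binary and config paths taken verbatim from the log:

	# Sketch of the phased kubeadm re-init shown in the log.
	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # poll until the apiserver process appears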
	I0816 18:14:17.362436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:17.365728   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366122   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:17.366154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366403   75006 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:17.370322   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:17.383153   75006 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:17.383303   75006 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:17.383364   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:17.420269   75006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:17.420339   75006 ssh_runner.go:195] Run: which lz4
	I0816 18:14:17.424477   75006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:17.428507   75006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:17.428547   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:18.717202   75006 crio.go:462] duration metric: took 1.292754157s to copy over tarball
	I0816 18:14:18.717278   75006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:19.241691   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.241729   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.241746   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.292883   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.292924   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.336097   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.363715   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.363753   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:19.835848   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.840615   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.840666   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.336291   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.343751   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:20.343785   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.835470   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.841217   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:14:20.849609   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:20.849642   74828 api_server.go:131] duration metric: took 4.514370955s to wait for apiserver health ...
	I0816 18:14:20.849653   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:20.849662   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:20.851828   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
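The healthz progression above is the expected one during a restart: anonymous requests are refused with 403, the endpoint then returns 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200. A manual probe against the same endpoint would look roughly like this; the address comes from the log, while the kubectl context name is assumed from the node name:

	# Hypothetical manual probe of the endpoint the log is polling.
	curl -k 'https://192.168.50.50:8443/healthz?verbose'                # 403 for anonymous requests until RBAC bootstrap completes
	kubectl --context no-preload-864476 get --raw='/healthz?verbose'    # authenticated check once the kubeconfig is usable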
	I0816 18:14:18.127538   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.128044   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.128077   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.127958   76298 retry.go:31] will retry after 543.91951ms: waiting for machine to come up
	I0816 18:14:18.673778   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.674328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.674351   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.674274   76298 retry.go:31] will retry after 694.978788ms: waiting for machine to come up
	I0816 18:14:19.370976   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.371577   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.371605   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.371538   76298 retry.go:31] will retry after 578.640883ms: waiting for machine to come up
	I0816 18:14:19.952328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.952917   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.952941   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.952863   76298 retry.go:31] will retry after 820.19233ms: waiting for machine to come up
	I0816 18:14:20.774767   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:20.775175   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:20.775200   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:20.775134   76298 retry.go:31] will retry after 1.262201815s: waiting for machine to come up
	I0816 18:14:22.038872   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:22.039357   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:22.039385   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:22.039302   76298 retry.go:31] will retry after 1.164593889s: waiting for machine to come up
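The retry loop above (process 75402) is only waiting for libvirt's DHCP server to hand old-k8s-version-783465 an address on the mk-old-k8s-version-783465 network. Outside the test harness the same state can be inspected by hand; the virsh invocations below are standard libvirt tooling and the names are taken from the log:

	# Hypothetical manual check of what the retry loop is polling for.
	virsh net-dhcp-leases mk-old-k8s-version-783465         # lease for MAC 52:54:00:d1:97:35 appears once the guest is up
	virsh domifaddr old-k8s-version-783465 --source lease   # same information, queried per domain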
	I0816 18:14:20.853121   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:20.866117   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:20.888451   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:20.902482   74828 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:20.902530   74828 system_pods.go:61] "coredns-6f6b679f8f-w9cbm" [9b50c913-f492-4432-a50a-e0f727a7b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:20.902545   74828 system_pods.go:61] "etcd-no-preload-864476" [e45a11b8-fa3e-4a6e-9d06-5d82fdaf20dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:20.902557   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [1cf82575-b520-4bc0-9e90-d40c02b4468d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:20.902568   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [8c9123e0-16a4-4940-8464-4bec383bac90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:20.902577   74828 system_pods.go:61] "kube-proxy-vdqxz" [0332e87e-5c0c-41f5-88a9-31b7f8494eb6] Running
	I0816 18:14:20.902587   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [6139753f-b5cf-4af5-a9fa-03fb220e3dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:20.902606   74828 system_pods.go:61] "metrics-server-6867b74b74-rxtwg" [f0d04fc9-24c0-47e3-afdc-f250ef07900c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:20.902620   74828 system_pods.go:61] "storage-provisioner" [65303dd8-27d7-4bf3-ae58-ff5fe556f17f] Running
	I0816 18:14:20.902631   74828 system_pods.go:74] duration metric: took 14.150825ms to wait for pod list to return data ...
	I0816 18:14:20.902645   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:20.909305   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:20.909342   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:20.909355   74828 node_conditions.go:105] duration metric: took 6.699359ms to run NodePressure ...
	I0816 18:14:20.909377   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:21.193348   74828 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198555   74828 kubeadm.go:739] kubelet initialised
	I0816 18:14:21.198585   74828 kubeadm.go:740] duration metric: took 5.20722ms waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198595   74828 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:21.204695   74828 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.212855   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212877   74828 pod_ready.go:82] duration metric: took 8.157781ms for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.212889   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212899   74828 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.220125   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220150   74828 pod_ready.go:82] duration metric: took 7.241861ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.220158   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220166   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.226930   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226957   74828 pod_ready.go:82] duration metric: took 6.783402ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.226967   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226976   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.292011   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292054   74828 pod_ready.go:82] duration metric: took 65.066708ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.292066   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292075   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692536   74828 pod_ready.go:93] pod "kube-proxy-vdqxz" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:21.692564   74828 pod_ready.go:82] duration metric: took 400.476293ms for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692577   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
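The pod_ready loop above deliberately skips control-plane pods while the node itself still reports Ready=False. A rough manual equivalent of those checks is sketched below; the pod and node names come from the log, and the kubectl context name is an assumption based on the profile name:

	# Rough manual equivalent of the readiness wait in the log (context name assumed).
	kubectl --context no-preload-864476 get node no-preload-864476 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl --context no-preload-864476 -n kube-system wait pod/etcd-no-preload-864476 \
	  --for=condition=Ready --timeout=4m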
	I0816 18:14:21.155261   75006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437939279s)
	I0816 18:14:21.155296   75006 crio.go:469] duration metric: took 2.438065212s to extract the tarball
	I0816 18:14:21.155325   75006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:21.199451   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:21.249963   75006 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:14:21.249990   75006 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:14:21.250002   75006 kubeadm.go:934] updating node { 192.168.72.144 8444 v1.31.0 crio true true} ...
	I0816 18:14:21.250129   75006 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-256678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:21.250211   75006 ssh_runner.go:195] Run: crio config
	I0816 18:14:21.299619   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:21.299644   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:21.299663   75006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:21.299684   75006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-256678 NodeName:default-k8s-diff-port-256678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:21.299813   75006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-256678"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:21.299880   75006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:21.310127   75006 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:21.310205   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:21.319566   75006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 18:14:21.337043   75006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:21.352319   75006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 18:14:21.370117   75006 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:21.373986   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:21.386518   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:21.508855   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:21.525184   75006 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678 for IP: 192.168.72.144
	I0816 18:14:21.525209   75006 certs.go:194] generating shared ca certs ...
	I0816 18:14:21.525230   75006 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:21.525413   75006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:21.525468   75006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:21.525481   75006 certs.go:256] generating profile certs ...
	I0816 18:14:21.525604   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/client.key
	I0816 18:14:21.525688   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key.ac6d83aa
	I0816 18:14:21.525738   75006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key
	I0816 18:14:21.525888   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:21.525931   75006 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:21.525944   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:21.525991   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:21.526028   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:21.526052   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:21.526101   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:21.526719   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:21.556992   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:21.590311   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:21.624782   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:21.655118   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 18:14:21.695431   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:21.722575   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:21.744870   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:14:21.770850   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:21.793906   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:21.817643   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:21.839584   75006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:21.856447   75006 ssh_runner.go:195] Run: openssl version
	I0816 18:14:21.862104   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:21.872584   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876886   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876945   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.882424   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:21.892761   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:21.904506   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909624   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909687   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.915765   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:21.927160   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:21.937381   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941423   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941477   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.946741   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
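	The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL hash-directory names: each CA is linked as <subject-hash>.0 under /etc/ssl/certs, which is why every ln is preceded by an openssl x509 -hash call. A sketch of the same two steps for one of the certificates, reusing the paths from the log:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"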
	I0816 18:14:21.958082   75006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:21.962431   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:21.969889   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:21.977302   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:21.983468   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:21.989115   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:21.994569   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
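	The six openssl probes above use -checkend 86400 to ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check would presumably cause that certificate to be regenerated rather than reused. A standalone sketch of one such check:

	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "certificate valid for at least another 24h"
	    else
	        echo "certificate expires within 24h (or failed to parse)"
	    fi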
	I0816 18:14:21.999962   75006 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:22.000090   75006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:22.000139   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.034063   75006 cri.go:89] found id: ""
	I0816 18:14:22.034158   75006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:22.043988   75006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:22.044003   75006 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:22.044040   75006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:22.053276   75006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:22.054255   75006 kubeconfig.go:125] found "default-k8s-diff-port-256678" server: "https://192.168.72.144:8444"
	I0816 18:14:22.056408   75006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:22.065394   75006 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I0816 18:14:22.065429   75006 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:22.065443   75006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:22.065496   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.112797   75006 cri.go:89] found id: ""
	I0816 18:14:22.112889   75006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:22.130231   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:22.139432   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:22.139451   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:22.139493   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:14:22.148118   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:22.148168   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:22.158088   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:14:22.166741   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:22.166803   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:22.175578   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.185238   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:22.185286   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.194074   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:14:22.205053   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:22.205105   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:22.216506   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:22.228754   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:22.344597   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.006750   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.275587   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.356515   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
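	Because this is a restart of an existing control plane, the full kubeadm init is not rerun; the individual phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) are replayed against the regenerated config. Condensed into a loop for readability (same binary path and config file as in the log; error handling omitted):

	    CONF=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        # $phase is left unquoted on purpose so "certs all" splits into two arguments
	        sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase $phase --config "$CONF"
	    done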
	I0816 18:14:23.432890   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:23.432991   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.933834   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.433736   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.205567   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:23.206051   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:23.206078   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:23.206007   76298 retry.go:31] will retry after 2.304886921s: waiting for machine to come up
	I0816 18:14:25.512748   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:25.513295   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:25.513321   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:25.513261   76298 retry.go:31] will retry after 2.603393394s: waiting for machine to come up
	I0816 18:14:23.801346   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:26.199045   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:28.205981   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:24.933846   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.954190   75006 api_server.go:72] duration metric: took 1.521307594s to wait for apiserver process to appear ...
	I0816 18:14:24.954219   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:24.954242   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.835517   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.835552   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.835567   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.842961   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.842992   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.954290   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.963372   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:27.963400   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.455035   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.460244   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.460279   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.954475   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.962766   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.962802   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.454298   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.458650   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.458681   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.954582   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.959359   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.959384   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.455077   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.461068   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.461099   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.954772   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.960557   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.960588   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:31.455232   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:31.460157   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:14:31.471015   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:31.471046   75006 api_server.go:131] duration metric: took 6.516819341s to wait for apiserver health ...
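	The 403 and 500 responses above are transient: the 403 comes back while the RBAC bootstrap roles (including system:public-info-viewer) do not yet exist, and the 500 bodies list the post-start hooks that are still pending; the wait only ends once /healthz returns a plain 200 "ok". A sketch of an equivalent probe loop against the same endpoint (-k because the probe does not present the cluster CA):

	    until curl -ksf https://192.168.72.144:8444/healthz | grep -qx ok; do
	        sleep 0.5
	    done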
	I0816 18:14:31.471056   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:31.471064   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:31.472930   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:28.118105   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:28.118675   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:28.118706   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:28.118637   76298 retry.go:31] will retry after 2.400714985s: waiting for machine to come up
	I0816 18:14:30.521623   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:30.522157   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:30.522196   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:30.522111   76298 retry.go:31] will retry after 3.210603239s: waiting for machine to come up
	I0816 18:14:30.699930   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:33.200755   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:31.474388   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:31.484723   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
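	The 496-byte conflist copied here is the bridge CNI configuration being set up for this profile; its exact contents are not echoed in the log. Purely as an illustration of the shape such a file takes (field values below are assumptions, not the payload from this run, apart from the pod subnet, which matches the kubeadm config above):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF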
	I0816 18:14:31.502094   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:31.511169   75006 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:31.511207   75006 system_pods.go:61] "coredns-6f6b679f8f-2sgmk" [3c98207c-ab70-435e-a725-3d6b108515d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:31.511215   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [c6d0dbe2-8b80-4fb2-8408-7b2e668cf4cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:31.511221   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [4506e38e-6685-41f8-98b1-738b35476ad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:31.511228   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [14282ea5-2ebc-4ea6-8e06-829e86296333] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:31.511232   75006 system_pods.go:61] "kube-proxy-l4lr2" [880ceec6-c3d1-4934-b02a-7a175ded8a02] Running
	I0816 18:14:31.511236   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [b122d1cd-12e8-4b87-a179-c50baf4c89d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:31.511241   75006 system_pods.go:61] "metrics-server-6867b74b74-fc4h4" [3cb9624e-98b4-4edb-a2de-d6a971520cac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:31.511244   75006 system_pods.go:61] "storage-provisioner" [79442d12-c28b-447e-ae96-e4c2ddb5c4da] Running
	I0816 18:14:31.511250   75006 system_pods.go:74] duration metric: took 9.137933ms to wait for pod list to return data ...
	I0816 18:14:31.511256   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:31.515339   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:31.515361   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:31.515370   75006 node_conditions.go:105] duration metric: took 4.110442ms to run NodePressure ...
	I0816 18:14:31.515387   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:31.774197   75006 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778258   75006 kubeadm.go:739] kubelet initialised
	I0816 18:14:31.778276   75006 kubeadm.go:740] duration metric: took 4.052927ms waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778283   75006 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:31.782595   75006 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:33.788205   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
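	pod_ready.go is polling, for up to 4m0s, until each of the system-critical pods listed above reports Ready. The same condition can be expressed with kubectl (a sketch; the context name is assumed to match the profile name found in the kubeconfig earlier in the log):

	    kubectl --context default-k8s-diff-port-256678 -n kube-system wait pod \
	        -l k8s-app=kube-dns --for=condition=Ready --timeout=4m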
	I0816 18:14:35.053312   74510 start.go:364] duration metric: took 53.786665535s to acquireMachinesLock for "embed-certs-777541"
	I0816 18:14:35.053367   74510 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:35.053372   74510 fix.go:54] fixHost starting: 
	I0816 18:14:35.053687   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:35.053718   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:35.073509   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0816 18:14:35.073935   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:35.074396   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:14:35.074420   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:35.074749   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:35.074928   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:35.075102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:14:35.076710   74510 fix.go:112] recreateIfNeeded on embed-certs-777541: state=Stopped err=<nil>
	I0816 18:14:35.076738   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	W0816 18:14:35.076903   74510 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:35.078759   74510 out.go:177] * Restarting existing kvm2 VM for "embed-certs-777541" ...
	I0816 18:14:33.735394   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735898   75402 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:14:33.735925   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735933   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:14:33.736407   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.736439   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:14:33.736459   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | skip adding static IP to network mk-old-k8s-version-783465 - found existing host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"}
	I0816 18:14:33.736478   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:14:33.736492   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:14:33.739028   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739377   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.739397   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739596   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:14:33.739689   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:14:33.739724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:33.739747   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:14:33.739785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:14:33.861036   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:33.861405   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:14:33.862105   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:33.864850   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865245   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.865272   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865542   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:14:33.865796   75402 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:33.865820   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:33.866053   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.868422   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868761   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.868795   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868911   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.869095   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869267   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869415   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.869579   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.869796   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.869810   75402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:33.972880   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:33.972907   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973141   75402 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:14:33.973172   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973378   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.976198   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976530   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.976563   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976747   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.976945   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977086   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977228   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.977369   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.977529   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.977540   75402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:14:34.086092   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:14:34.086123   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.088785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089107   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.089132   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089285   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.089527   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089684   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089828   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.089997   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.090152   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.090168   75402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:34.200744   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:34.200779   75402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:34.200834   75402 buildroot.go:174] setting up certificates
	I0816 18:14:34.200848   75402 provision.go:84] configureAuth start
	I0816 18:14:34.200862   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:34.201175   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:34.203868   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204297   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.204344   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204506   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.207067   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207441   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.207464   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207810   75402 provision.go:143] copyHostCerts
	I0816 18:14:34.207869   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:34.207892   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:34.207951   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:34.208058   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:34.208069   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:34.208103   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:34.208180   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:34.208192   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:34.208220   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:34.208291   75402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
	I0816 18:14:34.413800   75402 provision.go:177] copyRemoteCerts
	I0816 18:14:34.413857   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:34.413881   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.416724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417138   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.417173   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417345   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.417673   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.417894   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.418089   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.495519   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:34.517414   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:14:34.540423   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:14:34.563983   75402 provision.go:87] duration metric: took 363.122639ms to configureAuth
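As a side note on the configureAuth step above: a minimal, illustrative Go sketch of issuing a server certificate with the SANs the log lists (127.0.0.1, 192.168.39.211, localhost, minikube, old-k8s-version-783465). It is self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem; writeServerCert is a made-up helper name, not minikube's provision code.

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // writeServerCert creates a certificate whose SANs match the names and IPs
    // logged above. Self-signed purely for illustration; the real provisioner
    // signs with the cluster CA key instead.
    func writeServerCert(path string) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-783465"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the profile
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-783465"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		return err
    	}
    	f, err := os.Create(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	return pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }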
	I0816 18:14:34.564019   75402 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:34.564229   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:14:34.564299   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.567149   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567550   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.567580   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567753   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.567935   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568098   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568255   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.568448   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.568659   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.568680   75402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:34.824064   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:34.824091   75402 machine.go:96] duration metric: took 958.278616ms to provisionDockerMachine
	I0816 18:14:34.824106   75402 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:14:34.824120   75402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:34.824169   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:34.824556   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:34.824599   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.827203   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827517   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.827547   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827677   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.827869   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.828033   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.828171   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.912148   75402 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:34.916652   75402 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:34.916681   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:34.916755   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:34.916864   75402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:34.916989   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:34.927061   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:34.949703   75402 start.go:296] duration metric: took 125.581331ms for postStartSetup
	I0816 18:14:34.949743   75402 fix.go:56] duration metric: took 19.13519024s for fixHost
	I0816 18:14:34.949763   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.952740   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953090   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.953124   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.953532   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953715   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953861   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.954029   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.954229   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.954242   75402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:35.053143   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832075.025252523
	
	I0816 18:14:35.053171   75402 fix.go:216] guest clock: 1723832075.025252523
	I0816 18:14:35.053180   75402 fix.go:229] Guest: 2024-08-16 18:14:35.025252523 +0000 UTC Remote: 2024-08-16 18:14:34.949747236 +0000 UTC m=+221.880938919 (delta=75.505287ms)
	I0816 18:14:35.053204   75402 fix.go:200] guest clock delta is within tolerance: 75.505287ms
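The guest/host clock comparison above reduces to a delta-with-tolerance check; a minimal Go sketch of that arithmetic follows. The 2-second tolerance and the function name are assumptions for illustration, the log only shows the 75.505287ms delta passing.

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports the absolute guest-vs-host clock difference and
    // whether it falls inside the given tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	guest := time.Unix(1723832075, 25252523)       // 2024-08-16 18:14:35.025252523 UTC, from the log
    	host := guest.Add(-75505287 * time.Nanosecond) // reproduces the logged 75.505287ms delta
    	d, ok := clockDeltaOK(guest, host, 2*time.Second)
    	fmt.Println(d, ok) // 75.505287ms true
    }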
	I0816 18:14:35.053211   75402 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 19.238692888s
	I0816 18:14:35.053243   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.053549   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:35.056365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.056792   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.056823   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.057009   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057509   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057731   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057831   75402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:35.057892   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.057951   75402 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:35.057972   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.060543   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060733   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060987   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061016   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061126   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061148   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061154   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061319   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061339   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061456   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061518   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061639   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061720   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.061773   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.174137   75402 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:35.181704   75402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:35.323490   75402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:35.330733   75402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:35.330807   75402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:35.350653   75402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:35.350679   75402 start.go:495] detecting cgroup driver to use...
	I0816 18:14:35.350763   75402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:35.372307   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:35.386513   75402 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:35.386598   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:35.400406   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:35.414761   75402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:35.540356   75402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:35.675726   75402 docker.go:233] disabling docker service ...
	I0816 18:14:35.675793   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:35.691169   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:35.707288   75402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:35.858149   75402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:35.981654   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:35.996396   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:36.013656   75402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:14:36.013711   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.023839   75402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:36.023907   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.033889   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.043727   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.053496   75402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:36.063694   75402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:36.072919   75402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:36.072979   75402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:36.085707   75402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:36.095377   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:36.219235   75402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:36.384915   75402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:36.384990   75402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:36.392122   75402 start.go:563] Will wait 60s for crictl version
	I0816 18:14:36.392196   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:36.397589   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:36.443581   75402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:36.443710   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.473740   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.512542   75402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:14:36.513678   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:36.517404   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.517912   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:36.517948   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.518190   75402 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:36.523577   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
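The shell pipeline above rewrites /etc/hosts so that exactly one host.minikube.internal entry remains. A rough Go equivalent of that idempotent update, purely as an illustration (ensureHostsEntry is a made-up helper, not minikube's code):

    package sketch

    import "strings"

    // ensureHostsEntry drops any existing line mapping the given hostname and
    // appends a fresh "<ip>\t<hostname>" entry, mirroring the
    // `{ grep -v $'\t<name>$' /etc/hosts; echo "<ip>\t<name>"; }` pipeline above.
    func ensureHostsEntry(hostsFile, ip, hostname string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // remove the stale entry, like grep -v
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return strings.Join(kept, "\n") + "\n"
    }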
	I0816 18:14:36.536188   75402 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:36.536361   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:14:36.536425   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:36.587027   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:36.587085   75402 ssh_runner.go:195] Run: which lz4
	I0816 18:14:36.590780   75402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:36.594635   75402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:36.594673   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
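The "couldn't find preloaded image ... assuming images are not preloaded" decision above comes from inspecting `sudo crictl images --output json`. A minimal sketch of that lookup, assuming the usual {"images":[{"repoTags":[...]}]} shape of crictl's JSON output (field names assumed, not taken from minikube's source):

    package sketch

    import "encoding/json"

    // crictlImages models only the fields this check needs from
    // `crictl images --output json`.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether a tag such as
    // "registry.k8s.io/kube-apiserver:v1.20.0" is already present in the
    // container runtime's image store.
    func hasImage(crictlJSON []byte, tag string) (bool, error) {
    	var out crictlImages
    	if err := json.Unmarshal(crictlJSON, &out); err != nil {
    		return false, err
    	}
    	for _, img := range out.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }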
	I0816 18:14:35.080033   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Start
	I0816 18:14:35.080220   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring networks are active...
	I0816 18:14:35.080971   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network default is active
	I0816 18:14:35.081366   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network mk-embed-certs-777541 is active
	I0816 18:14:35.081887   74510 main.go:141] libmachine: (embed-certs-777541) Getting domain xml...
	I0816 18:14:35.082634   74510 main.go:141] libmachine: (embed-certs-777541) Creating domain...
	I0816 18:14:36.459300   74510 main.go:141] libmachine: (embed-certs-777541) Waiting to get IP...
	I0816 18:14:36.460282   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.460801   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.460883   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.460778   76422 retry.go:31] will retry after 291.491491ms: waiting for machine to come up
	I0816 18:14:36.754548   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.755372   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.755412   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.755313   76422 retry.go:31] will retry after 356.347467ms: waiting for machine to come up
	I0816 18:14:37.113124   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.113704   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.113739   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.113676   76422 retry.go:31] will retry after 386.244375ms: waiting for machine to come up
	I0816 18:14:37.502241   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.502796   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.502826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.502750   76422 retry.go:31] will retry after 437.69847ms: waiting for machine to come up
	I0816 18:14:37.942667   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.943423   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.943456   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.943378   76422 retry.go:31] will retry after 709.064032ms: waiting for machine to come up
	I0816 18:14:38.653840   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:38.654349   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:38.654386   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:38.654297   76422 retry.go:31] will retry after 594.417028ms: waiting for machine to come up
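The "will retry after ...: waiting for machine to come up" lines above come from a jittered retry loop with growing delays (291ms, 356ms, 386ms, ...). A rough stand-in for that pattern; retryWithBackoff and the growth factor are assumptions for illustration, not minikube's retry.go API:

    package sketch

    import (
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn with a growing, slightly jittered delay
    // until it succeeds or the attempt budget is exhausted.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
    	delay := base
    	for i := 0; ; i++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if i+1 >= attempts {
    			return err
    		}
    		var jitter time.Duration
    		if half := int64(delay) / 2; half > 0 {
    			jitter = time.Duration(rand.Int63n(half))
    		}
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2
    	}
    }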
	I0816 18:14:34.700134   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:34.700158   74828 pod_ready.go:82] duration metric: took 13.007571631s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:34.700171   74828 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:36.707977   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:38.708527   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.790842   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:37.791236   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:37.791278   75006 pod_ready.go:82] duration metric: took 6.008656328s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:37.791294   75006 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798513   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:39.798543   75006 pod_ready.go:82] duration metric: took 2.007240233s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798557   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
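The pod_ready.go lines above poll pods until their Ready condition turns True. A minimal client-go sketch of that check; the polling interval and function name are assumptions for illustration, not minikube's implementation:

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its PodReady condition is True or the
    // timeout expires, similar in shape to the 4m0s waits logged above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return context.DeadlineExceeded
    		}
    		time.Sleep(2 * time.Second)
    	}
    }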
	I0816 18:14:38.127403   75402 crio.go:462] duration metric: took 1.536659915s to copy over tarball
	I0816 18:14:38.127504   75402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:41.109575   75402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982013621s)
	I0816 18:14:41.109639   75402 crio.go:469] duration metric: took 2.982198625s to extract the tarball
	I0816 18:14:41.109650   75402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:41.152940   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:41.185863   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:41.185892   75402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:14:41.185982   75402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.186003   75402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.186036   75402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.186044   75402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.186103   75402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:14:41.186171   75402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187521   75402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:14:41.187532   75402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187542   75402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.187527   75402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.187595   75402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.187605   75402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.187688   75402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.187840   75402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.421551   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:14:41.462506   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.467716   75402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:14:41.467758   75402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:14:41.467810   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508571   75402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:14:41.508638   75402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.508687   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508691   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.514560   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.520003   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.526475   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.526892   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.533271   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.569269   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.569426   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.694043   75402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:14:41.694100   75402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.694049   75402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:14:41.694210   75402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.694173   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.694268   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.701292   75402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:14:41.701337   75402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.701389   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.707345   75402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:14:41.707415   75402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.707467   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.711820   75402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:14:41.711854   75402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.711896   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.723813   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.723850   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.723814   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.723939   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.723951   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.724003   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.724060   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.872645   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.872674   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:14:41.873747   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.873786   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.873891   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.873899   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.873960   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.997519   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:14:42.002048   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:42.002091   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:42.002140   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:42.002178   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:42.002218   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:42.070993   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:42.115418   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:14:42.115527   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:14:42.115623   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:14:42.115631   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:14:42.115689   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:14:42.235706   75402 cache_images.go:92] duration metric: took 1.049784726s to LoadCachedImages
	W0816 18:14:42.235807   75402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0816 18:14:42.235821   75402 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:14:42.235939   75402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:42.236024   75402 ssh_runner.go:195] Run: crio config
	I0816 18:14:42.286742   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:14:42.286763   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:42.286771   75402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:42.286789   75402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:14:42.286904   75402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:42.286961   75402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:14:42.297015   75402 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:42.297098   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:42.306400   75402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:14:42.322812   75402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:42.339791   75402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 18:14:42.356930   75402 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:42.360578   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:42.373248   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:42.495499   75402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:42.511910   75402 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:14:42.511942   75402 certs.go:194] generating shared ca certs ...
	I0816 18:14:42.511964   75402 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:42.512147   75402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:42.512206   75402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:42.512220   75402 certs.go:256] generating profile certs ...
	I0816 18:14:42.512361   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:14:42.512431   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:14:42.512483   75402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:14:42.512664   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:42.512709   75402 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:42.512724   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:42.512754   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:42.512794   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:42.512825   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:42.512881   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:42.513660   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:42.552291   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:42.585617   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:42.611017   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:42.638092   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:14:42.676877   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:14:42.710091   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:42.743734   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:42.779905   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:42.802779   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:42.826432   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:42.849286   75402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:42.866901   75402 ssh_runner.go:195] Run: openssl version
	I0816 18:14:42.872283   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:42.882976   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887432   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887504   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.893275   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:42.903687   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:42.915232   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919669   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919735   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.925282   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:42.937888   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:42.949994   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954495   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954548   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.960295   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:42.972006   75402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:42.976450   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:42.982741   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:42.988649   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:42.995021   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:43.000965   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:43.007030   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:43.012891   75402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:43.012983   75402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:43.013071   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.050670   75402 cri.go:89] found id: ""
	I0816 18:14:43.050741   75402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:43.060748   75402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:43.060773   75402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:43.060825   75402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:43.070299   75402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:43.071251   75402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:14:43.071945   75402 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-783465" cluster setting kubeconfig missing "old-k8s-version-783465" context setting]
	I0816 18:14:43.072870   75402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:39.250064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:39.250979   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:39.251028   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:39.250914   76422 retry.go:31] will retry after 1.014851653s: waiting for machine to come up
	I0816 18:14:40.266811   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:40.267287   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:40.267323   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:40.267238   76422 retry.go:31] will retry after 1.333311972s: waiting for machine to come up
	I0816 18:14:41.602031   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:41.602532   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:41.602565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:41.602480   76422 retry.go:31] will retry after 1.525496469s: waiting for machine to come up
	I0816 18:14:43.130136   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:43.130620   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:43.130661   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:43.130563   76422 retry.go:31] will retry after 2.206344656s: waiting for machine to come up
	I0816 18:14:41.206879   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.706278   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:41.806382   75006 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.927145   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.927173   75006 pod_ready.go:82] duration metric: took 4.128607781s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.927182   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932293   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.932314   75006 pod_ready.go:82] duration metric: took 5.122737ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932326   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937128   75006 pod_ready.go:93] pod "kube-proxy-l4lr2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.937146   75006 pod_ready.go:82] duration metric: took 4.812798ms for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937154   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.941992   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.942018   75006 pod_ready.go:82] duration metric: took 4.856588ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.942030   75006 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.141753   75402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:43.154269   75402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0816 18:14:43.154324   75402 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:43.154341   75402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:43.154404   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.192966   75402 cri.go:89] found id: ""
	I0816 18:14:43.193035   75402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:43.213101   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:43.222811   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:43.222826   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:43.222870   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:43.232196   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:43.232261   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:43.241633   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:43.250751   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:43.250800   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:43.260197   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.268943   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:43.269000   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.277887   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:43.286281   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:43.286391   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:43.295899   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:43.306026   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:43.441487   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.213457   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.431649   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.553955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.646817   75402 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:44.646923   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.147202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.648050   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.147958   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.647398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.646992   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.338228   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:45.338729   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:45.338763   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:45.338660   76422 retry.go:31] will retry after 2.526891535s: waiting for machine to come up
	I0816 18:14:47.868326   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:47.868821   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:47.868853   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:47.868774   76422 retry.go:31] will retry after 2.866643935s: waiting for machine to come up
	I0816 18:14:45.706669   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:47.707062   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:45.948791   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.447930   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.147987   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.646974   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.147114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.647020   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.147765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.647135   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.147506   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.647568   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.647865   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.736760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:50.737295   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:50.737331   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:50.737245   76422 retry.go:31] will retry after 3.824271015s: waiting for machine to come up
	I0816 18:14:50.206249   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.206435   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:50.449586   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.948577   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:54.566285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.566784   74510 main.go:141] libmachine: (embed-certs-777541) Found IP for machine: 192.168.61.218
	I0816 18:14:54.566809   74510 main.go:141] libmachine: (embed-certs-777541) Reserving static IP address...
	I0816 18:14:54.566825   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has current primary IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.567171   74510 main.go:141] libmachine: (embed-certs-777541) Reserved static IP address: 192.168.61.218
	I0816 18:14:54.567193   74510 main.go:141] libmachine: (embed-certs-777541) Waiting for SSH to be available...
	I0816 18:14:54.567211   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.567231   74510 main.go:141] libmachine: (embed-certs-777541) DBG | skip adding static IP to network mk-embed-certs-777541 - found existing host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"}
	I0816 18:14:54.567245   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Getting to WaitForSSH function...
	I0816 18:14:54.569546   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.569864   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.569890   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.570019   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH client type: external
	I0816 18:14:54.570046   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa (-rw-------)
	I0816 18:14:54.570073   74510 main.go:141] libmachine: (embed-certs-777541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:54.570082   74510 main.go:141] libmachine: (embed-certs-777541) DBG | About to run SSH command:
	I0816 18:14:54.570109   74510 main.go:141] libmachine: (embed-certs-777541) DBG | exit 0
	I0816 18:14:54.692450   74510 main.go:141] libmachine: (embed-certs-777541) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:54.692828   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetConfigRaw
	I0816 18:14:54.693486   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:54.696565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.696943   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.696987   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.697248   74510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/config.json ...
	I0816 18:14:54.697455   74510 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:54.697475   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:54.697686   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.700172   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700491   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.700520   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700716   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.700906   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701239   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.701440   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.701650   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.701662   74510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:54.800770   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:54.800805   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801047   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:14:54.801079   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801264   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.804313   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804734   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.804761   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804940   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.805132   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805485   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.805711   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.805869   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.805886   74510 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-777541 && echo "embed-certs-777541" | sudo tee /etc/hostname
	I0816 18:14:54.918908   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-777541
	
	I0816 18:14:54.918949   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.921760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922117   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.922146   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922338   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.922511   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922681   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922843   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.923033   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.923243   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.923261   74510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-777541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-777541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-777541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:55.028983   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:55.029016   74510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:55.029040   74510 buildroot.go:174] setting up certificates
	I0816 18:14:55.029051   74510 provision.go:84] configureAuth start
	I0816 18:14:55.029064   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:55.029320   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.032273   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032693   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.032743   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032983   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.035257   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035581   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.035606   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035742   74510 provision.go:143] copyHostCerts
	I0816 18:14:55.035797   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:55.035814   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:55.035899   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:55.035996   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:55.036004   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:55.036024   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:55.036081   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:55.036087   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:55.036106   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:55.036155   74510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.embed-certs-777541 san=[127.0.0.1 192.168.61.218 embed-certs-777541 localhost minikube]
	I0816 18:14:55.182540   74510 provision.go:177] copyRemoteCerts
	I0816 18:14:55.182606   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:55.182633   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.185807   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186179   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.186229   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186429   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.186619   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.186770   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.186884   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.262494   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:14:55.285186   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:55.307082   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:55.328912   74510 provision.go:87] duration metric: took 299.848734ms to configureAuth
	I0816 18:14:55.328934   74510 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:55.329140   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:55.329215   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.331989   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332366   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.332414   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332594   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.332801   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333006   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333158   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.333312   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.333501   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.333522   74510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:55.579734   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:55.579765   74510 machine.go:96] duration metric: took 882.296402ms to provisionDockerMachine
	I0816 18:14:55.579781   74510 start.go:293] postStartSetup for "embed-certs-777541" (driver="kvm2")
	I0816 18:14:55.579793   74510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:55.579814   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.580182   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:55.580216   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.582826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583250   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.583285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583374   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.583574   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.583739   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.583972   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.663379   74510 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:55.667205   74510 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:55.667231   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:55.667321   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:55.667426   74510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:55.667560   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:55.676427   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:55.698188   74510 start.go:296] duration metric: took 118.396211ms for postStartSetup
	I0816 18:14:55.698226   74510 fix.go:56] duration metric: took 20.644852989s for fixHost
	I0816 18:14:55.698245   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.701014   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701359   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.701390   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701587   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.701755   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.701924   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.702070   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.702241   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.702452   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.702464   74510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:55.801397   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832095.756052952
	
	I0816 18:14:55.801431   74510 fix.go:216] guest clock: 1723832095.756052952
	I0816 18:14:55.801443   74510 fix.go:229] Guest: 2024-08-16 18:14:55.756052952 +0000 UTC Remote: 2024-08-16 18:14:55.698231489 +0000 UTC m=+357.018707788 (delta=57.821463ms)
	I0816 18:14:55.801492   74510 fix.go:200] guest clock delta is within tolerance: 57.821463ms
	I0816 18:14:55.801504   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 20.74815396s
	I0816 18:14:55.801528   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.801781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.804216   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804617   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.804659   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804795   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805395   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805622   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805730   74510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:55.805781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.805849   74510 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:55.805877   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.808587   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.808946   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.808978   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809080   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809249   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809415   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.809417   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809442   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809575   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809597   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809720   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809766   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.809857   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809970   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.885026   74510 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:55.927940   74510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:56.072936   74510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:56.080952   74510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:56.081029   74510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:56.100709   74510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:56.100734   74510 start.go:495] detecting cgroup driver to use...
	I0816 18:14:56.100791   74510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:56.115759   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:56.129714   74510 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:56.129774   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:56.142909   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:56.156413   74510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:56.268818   74510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:56.424536   74510 docker.go:233] disabling docker service ...
	I0816 18:14:56.424612   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:56.438033   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:56.450479   74510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:56.560132   74510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:56.683671   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:56.697636   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:56.716486   74510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:56.716560   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.726082   74510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:56.726144   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.735971   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.745410   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.754952   74510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:56.764717   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.774153   74510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.789843   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.799399   74510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:56.807679   74510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:56.807743   74510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:56.819873   74510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:56.829921   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:56.936372   74510 ssh_runner.go:195] Run: sudo systemctl restart crio
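Condensed, the CRI-O reconfiguration logged above comes down to the following shell steps on the guest. This is a sketch assembled only from the commands shown here; the 02-crio.conf drop-in path and the pause image tag are taken verbatim from the log.

# point crictl at the CRI-O socket
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

# allow unprivileged low ports inside pods and make sure forwarding is on
sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
sudo modprobe br_netfilter
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

# apply the new configuration
sudo systemctl daemon-reload
sudo systemctl restart crio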
	I0816 18:14:57.073931   74510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:57.073998   74510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:57.078254   74510 start.go:563] Will wait 60s for crictl version
	I0816 18:14:57.078327   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:14:57.081833   74510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:57.121402   74510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:57.121476   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.149262   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.183015   74510 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:53.146986   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.647279   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.147587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.647911   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.147322   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.647765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.147695   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.647296   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.147031   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.647108   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.184157   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:57.186758   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187177   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:57.187206   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187439   74510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:57.191152   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:57.203073   74510 kubeadm.go:883] updating cluster {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:57.203240   74510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:57.203332   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:57.238289   74510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:57.238348   74510 ssh_runner.go:195] Run: which lz4
	I0816 18:14:57.242251   74510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:57.246081   74510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:57.246124   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:58.459887   74510 crio.go:462] duration metric: took 1.217672418s to copy over tarball
	I0816 18:14:58.459960   74510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:54.707069   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.206750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:55.449391   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.449830   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:59.451338   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:58.147661   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.647270   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.147355   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.647821   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.148023   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.647165   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.147669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.647960   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.147721   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.647932   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.545989   74510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085985152s)
	I0816 18:15:00.546028   74510 crio.go:469] duration metric: took 2.086110527s to extract the tarball
	I0816 18:15:00.546039   74510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:15:00.587096   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:15:00.630366   74510 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:15:00.630394   74510 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:15:00.630405   74510 kubeadm.go:934] updating node { 192.168.61.218 8443 v1.31.0 crio true true} ...
	I0816 18:15:00.630540   74510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-777541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:15:00.630630   74510 ssh_runner.go:195] Run: crio config
	I0816 18:15:00.681196   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:00.681224   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:00.681235   74510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:15:00.681262   74510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-777541 NodeName:embed-certs-777541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:15:00.681439   74510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-777541"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:15:00.681534   74510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:15:00.691239   74510 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:15:00.691294   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:15:00.700059   74510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 18:15:00.717826   74510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:15:00.733475   74510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 18:15:00.750175   74510 ssh_runner.go:195] Run: grep 192.168.61.218	control-plane.minikube.internal$ /etc/hosts
	I0816 18:15:00.753865   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:15:00.765531   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:00.875234   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:00.893095   74510 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541 for IP: 192.168.61.218
	I0816 18:15:00.893115   74510 certs.go:194] generating shared ca certs ...
	I0816 18:15:00.893131   74510 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:00.893274   74510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:15:00.893318   74510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:15:00.893327   74510 certs.go:256] generating profile certs ...
	I0816 18:15:00.893403   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/client.key
	I0816 18:15:00.893459   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key.dd0c1a01
	I0816 18:15:00.893503   74510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key
	I0816 18:15:00.893617   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:15:00.893645   74510 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:15:00.893655   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:15:00.893675   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:15:00.893698   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:15:00.893721   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:15:00.893759   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:15:00.894445   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:15:00.936535   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:15:00.969775   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:15:01.013053   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:15:01.046087   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 18:15:01.073290   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:15:01.097033   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:15:01.119859   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:15:01.141943   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:15:01.168752   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:15:01.191193   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:15:01.213691   74510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:15:01.229374   74510 ssh_runner.go:195] Run: openssl version
	I0816 18:15:01.234563   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:15:01.244301   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248156   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248220   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.253468   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:15:01.262917   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:15:01.272577   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276790   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276841   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.281847   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:15:01.291789   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:15:01.302422   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306320   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306364   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.311335   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:15:01.320713   74510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:15:01.324442   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:15:01.330137   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:15:01.335693   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:15:01.340987   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:15:01.346071   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:15:01.351280   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
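The openssl runs above are freshness checks on the certificates that were kept rather than regenerated: -checkend 86400 exits 0 only if the certificate will still be valid 24 hours from now, so a non-zero status flags a certificate that is expired or about to expire. For example:

# exit 0 = valid for at least another 24h, non-zero = expires (or already expired) within 24h
if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
  echo "apiserver-kubelet-client.crt is good for at least another day"
else
  echo "apiserver-kubelet-client.crt expires within 24h"
fi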
	I0816 18:15:01.357275   74510 kubeadm.go:392] StartCluster: {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:15:01.357388   74510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:15:01.357427   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.400422   74510 cri.go:89] found id: ""
	I0816 18:15:01.400497   74510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:15:01.410142   74510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:15:01.410162   74510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:15:01.410211   74510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:15:01.419129   74510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:15:01.420130   74510 kubeconfig.go:125] found "embed-certs-777541" server: "https://192.168.61.218:8443"
	I0816 18:15:01.422036   74510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:15:01.430665   74510 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.218
	I0816 18:15:01.430694   74510 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:15:01.430705   74510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:15:01.430762   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.469108   74510 cri.go:89] found id: ""
	I0816 18:15:01.469182   74510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:15:01.486125   74510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:15:01.495311   74510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:15:01.495335   74510 kubeadm.go:157] found existing configuration files:
	
	I0816 18:15:01.495384   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:15:01.504066   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:15:01.504128   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:15:01.513222   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:15:01.521593   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:15:01.521692   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:15:01.530413   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.539027   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:15:01.539101   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.547802   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:15:01.557143   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:15:01.557203   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:15:01.568616   74510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:15:01.578091   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:01.700661   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.631047   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.833132   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.900476   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
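For this restart path the control plane is rebuilt phase by phase rather than with a full kubeadm init. The sequence logged above (the addon phase follows further down, once the API server reports healthy) is roughly:

KUBEADM="sudo env PATH=/var/lib/minikube/binaries/v1.31.0:$PATH kubeadm"
CFG=/var/tmp/minikube/kubeadm.yaml

$KUBEADM init phase certs all         --config $CFG   # generate any certificates that are missing
$KUBEADM init phase kubeconfig all    --config $CFG   # admin, kubelet, controller-manager, scheduler kubeconfigs
$KUBEADM init phase kubelet-start     --config $CFG   # write the kubelet config and (re)start the kubelet
$KUBEADM init phase control-plane all --config $CFG   # static pod manifests for apiserver, controller-manager, scheduler
$KUBEADM init phase etcd local        --config $CFG   # static pod manifest for the local etcd member
# ...once /healthz returns ok:
$KUBEADM init phase addon all         --config $CFG   # CoreDNS and kube-proxy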
	I0816 18:15:02.972431   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:15:02.972514   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.473296   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.707731   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:02.206825   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:01.948070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.948398   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.147098   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.646983   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.147320   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.647649   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.647999   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.147901   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.647340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.147339   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.648033   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.973603   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.472779   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.972846   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.473594   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.487878   74510 api_server.go:72] duration metric: took 2.51545841s to wait for apiserver process to appear ...
	I0816 18:15:05.487914   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:15:05.487937   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.450583   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.450618   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.450635   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.495625   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.495656   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.495669   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.516711   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.516744   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:04.836663   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:07.206999   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:06.447839   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.449939   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.988897   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.996347   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.996374   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.488013   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.499514   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:09.499559   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.988080   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.992106   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:15:09.998515   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:15:09.998542   74510 api_server.go:131] duration metric: took 4.510619176s to wait for apiserver health ...
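The probe runs as system:anonymous, so the initial 403s are expected until the RBAC bootstrap roles that permit unauthenticated reads of /healthz are in place, and the later 500s simply enumerate the poststarthooks that have not finished yet. A minimal equivalent of this wait loop, polling the same endpoint with curl (-k because only the status code matters here, not a verified serving certificate):

until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.61.218:8443/healthz)" = "200" ]; do
  curl -sk https://192.168.61.218:8443/healthz    # show the per-check [+]/[-] breakdown, as in the log
  sleep 1
done
echo "apiserver healthy"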
	I0816 18:15:09.998555   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:09.998563   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:10.000470   74510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:15:10.001870   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:15:10.011805   74510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:15:10.032349   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:15:10.046765   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:15:10.046798   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:15:10.046808   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:15:10.046817   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:15:10.046829   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:15:10.046838   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 18:15:10.046847   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:15:10.046855   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:15:10.046867   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 18:15:10.046876   74510 system_pods.go:74] duration metric: took 14.506593ms to wait for pod list to return data ...
	I0816 18:15:10.046889   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:15:10.050663   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:15:10.050686   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:15:10.050699   74510 node_conditions.go:105] duration metric: took 3.805313ms to run NodePressure ...
	I0816 18:15:10.050717   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:10.344177   74510 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348795   74510 kubeadm.go:739] kubelet initialised
	I0816 18:15:10.348820   74510 kubeadm.go:740] duration metric: took 4.612695ms waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348830   74510 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:10.355270   74510 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.361564   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361584   74510 pod_ready.go:82] duration metric: took 6.2936ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.361592   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361598   74510 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.367126   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367149   74510 pod_ready.go:82] duration metric: took 5.542782ms for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.367159   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367166   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.372241   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372262   74510 pod_ready.go:82] duration metric: took 5.086551ms for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.372273   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372301   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.436397   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436423   74510 pod_ready.go:82] duration metric: took 64.108858ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.436432   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436443   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.836116   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836146   74510 pod_ready.go:82] duration metric: took 399.693364ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.836158   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836165   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.235403   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235426   74510 pod_ready.go:82] duration metric: took 399.255693ms for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.235439   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235445   74510 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.635717   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635746   74510 pod_ready.go:82] duration metric: took 400.29283ms for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.635756   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635762   74510 pod_ready.go:39] duration metric: took 1.286923943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:11.635784   74510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:15:11.646221   74510 ops.go:34] apiserver oom_adj: -16
	I0816 18:15:11.646248   74510 kubeadm.go:597] duration metric: took 10.23607804s to restartPrimaryControlPlane
	I0816 18:15:11.646269   74510 kubeadm.go:394] duration metric: took 10.288999278s to StartCluster
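	(The oom_adj probe and restart timing above can be reproduced by hand against the same profile. A minimal sketch, assuming the profile from this run is still named embed-certs-777541 and is reachable via minikube ssh; the expected output is a negative value such as -16:)

	    # run the same check the test performs, on the node itself
	    minikube ssh -p embed-certs-777541 -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'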
	I0816 18:15:11.646322   74510 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.646405   74510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:15:11.648652   74510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.648939   74510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:15:11.649056   74510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:15:11.649124   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:15:11.649155   74510 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-777541"
	I0816 18:15:11.649165   74510 addons.go:69] Setting metrics-server=true in profile "embed-certs-777541"
	I0816 18:15:11.649192   74510 addons.go:234] Setting addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:11.649201   74510 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-777541"
	W0816 18:15:11.649205   74510 addons.go:243] addon metrics-server should already be in state true
	I0816 18:15:11.649193   74510 addons.go:69] Setting default-storageclass=true in profile "embed-certs-777541"
	I0816 18:15:11.649252   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649254   74510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-777541"
	W0816 18:15:11.649209   74510 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:15:11.649332   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649702   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649706   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649742   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649772   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649877   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649930   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.651580   74510 out.go:177] * Verifying Kubernetes components...
	I0816 18:15:11.652903   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:11.665975   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0816 18:15:11.666041   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0816 18:15:11.666404   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666439   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666986   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667005   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667051   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667085   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667312   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667517   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667846   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.667899   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.668039   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.668077   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.669328   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0816 18:15:11.669765   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.670270   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.670301   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.670658   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.670896   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.674148   74510 addons.go:234] Setting addon default-storageclass=true in "embed-certs-777541"
	W0816 18:15:11.674165   74510 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:15:11.674184   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.674448   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.674482   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.683629   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0816 18:15:11.683637   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0816 18:15:11.684040   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684048   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684499   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684516   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684653   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684670   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684968   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685114   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685136   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.685329   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.687030   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.687130   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.688852   74510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:15:11.688855   74510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:15:08.147308   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.647669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.147149   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.647072   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.147381   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.647567   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.147101   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.647587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.146972   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.647842   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.689590   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0816 18:15:11.690041   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.690152   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:15:11.690170   74510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:15:11.690186   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690223   74510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:11.690238   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:15:11.690253   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690606   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.690627   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.691006   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.691543   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.691575   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.693646   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693669   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693988   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694007   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694051   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694275   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694436   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694468   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694545   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694602   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694677   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.694885   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.709409   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0816 18:15:11.709800   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.710343   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.710363   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.710700   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.710874   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.712484   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.712691   74510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:11.712706   74510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:15:11.712723   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.715590   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716017   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.716050   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716167   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.716379   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.716572   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.716737   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.864710   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:11.885871   74510 node_ready.go:35] waiting up to 6m0s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:11.985725   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:12.007635   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:15:12.007669   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:15:12.040044   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:12.059661   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:15:12.059687   74510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:15:12.123787   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.123812   74510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:15:12.167249   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.457960   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.457985   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458264   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:12.458315   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458334   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.458348   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.458360   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458577   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458590   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468651   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.468675   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.468921   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.468940   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468963   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.203995   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163904081s)
	I0816 18:15:13.204048   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204060   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204309   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.204350   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204359   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.204368   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204376   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204562   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204589   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213068   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.045790147s)
	I0816 18:15:13.213101   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213115   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213533   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213551   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213555   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.213560   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213595   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213869   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213887   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213897   74510 addons.go:475] Verifying addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:13.213901   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.215724   74510 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 18:15:13.217031   74510 addons.go:510] duration metric: took 1.567977779s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
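	(The same addon set can be enabled interactively against an existing profile. A minimal sketch, assuming the profile name from this run, embed-certs-777541, and that the kubeconfig context created by minikube carries the same name:)

	    # enable the addons the test turns on, then confirm metrics-server was deployed
	    minikube addons enable storage-provisioner -p embed-certs-777541
	    minikube addons enable metrics-server -p embed-certs-777541
	    kubectl --context embed-certs-777541 -n kube-system get deployment metrics-server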
	I0816 18:15:09.706813   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:11.708577   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:10.947986   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:12.949227   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:13.147558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.647755   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.147408   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.647810   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.147888   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.647476   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.647785   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.647852   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.889379   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:15.889764   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:18.390031   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:14.207743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:16.705831   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:15.448826   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:17.950756   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:18.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.647013   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.147027   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.647100   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.147070   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.147251   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.647856   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.147427   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.647231   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.890110   74510 node_ready.go:49] node "embed-certs-777541" has status "Ready":"True"
	I0816 18:15:18.890138   74510 node_ready.go:38] duration metric: took 7.004237799s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:18.890156   74510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:18.897124   74510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902860   74510 pod_ready.go:93] pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:18.902878   74510 pod_ready.go:82] duration metric: took 5.73242ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902886   74510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:20.909185   74510 pod_ready.go:103] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.909629   74510 pod_ready.go:93] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:21.909660   74510 pod_ready.go:82] duration metric: took 3.006768325s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:21.909670   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916066   74510 pod_ready.go:93] pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.916090   74510 pod_ready.go:82] duration metric: took 1.006414177s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916099   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920882   74510 pod_ready.go:93] pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.920908   74510 pod_ready.go:82] duration metric: took 4.802561ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920918   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926952   74510 pod_ready.go:93] pod "kube-proxy-j5rl7" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.926975   74510 pod_ready.go:82] duration metric: took 6.0498ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926984   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
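	(The per-pod readiness loop above corresponds to ordinary kubectl checks. A minimal sketch, assuming the kubeconfig context matches the profile name embed-certs-777541:)

	    # list system pods, then wait on the same pods the test polls
	    kubectl --context embed-certs-777541 -n kube-system get pods -o wide
	    kubectl --context embed-certs-777541 -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m
	    kubectl --context embed-certs-777541 -n kube-system wait pod/etcd-embed-certs-777541 --for=condition=Ready --timeout=6m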
	I0816 18:15:19.206127   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.206280   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.705588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:20.448793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:22.948798   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.647030   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.147677   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.647324   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.147973   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.147160   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.646963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.147620   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.647918   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.933953   74510 pod_ready.go:103] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.433826   74510 pod_ready.go:93] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:25.433846   74510 pod_ready.go:82] duration metric: took 2.506855714s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:25.433855   74510 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:27.440119   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.707915   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.206580   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.447687   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:27.948700   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.146994   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.647364   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.147332   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.647773   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.147276   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.647794   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.147398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.647565   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.147139   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.647961   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.440564   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:31.940747   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:30.706544   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.706852   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:29.948982   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.447920   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:34.448186   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:33.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.647087   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.147881   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.646988   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.147118   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.647978   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.147541   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.647423   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.147051   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.647726   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.439692   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.439956   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.440315   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:35.206291   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:37.206902   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.948416   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.447952   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.147192   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.647318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.147186   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.647662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.147044   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.647787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.147638   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.647490   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.147787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.647959   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.440405   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:42.440727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.207086   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.706048   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.706585   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.450069   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.948101   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.147938   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.647855   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.147781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.647710   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:44.647796   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:44.682176   75402 cri.go:89] found id: ""
	I0816 18:15:44.682207   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.682218   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:44.682226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:44.682285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:44.717500   75402 cri.go:89] found id: ""
	I0816 18:15:44.717530   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.717540   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:44.717552   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:44.717620   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:44.751816   75402 cri.go:89] found id: ""
	I0816 18:15:44.751847   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.751858   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:44.751865   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:44.751942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:44.783236   75402 cri.go:89] found id: ""
	I0816 18:15:44.783260   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.783267   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:44.783272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:44.783337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:44.813087   75402 cri.go:89] found id: ""
	I0816 18:15:44.813110   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.813116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:44.813122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:44.813166   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:44.843568   75402 cri.go:89] found id: ""
	I0816 18:15:44.843599   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.843609   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:44.843616   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:44.843679   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:44.873694   75402 cri.go:89] found id: ""
	I0816 18:15:44.873723   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.873734   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:44.873741   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:44.873808   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:44.906183   75402 cri.go:89] found id: ""
	I0816 18:15:44.906212   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.906222   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:44.906231   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:44.906241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:44.958963   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:44.958993   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:44.972390   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:44.972415   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:45.091624   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:45.091645   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:45.091661   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:45.159927   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:45.159963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
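	(When no control-plane containers are found, the collector above falls back to host-level logs; the same data can be pulled by hand over minikube ssh. A minimal sketch using the commands shown in the log, with PROFILE as a placeholder for the affected profile's name:)

	    PROFILE=<profile-name>   # placeholder: substitute the profile under test
	    minikube ssh -p "$PROFILE" -- 'sudo journalctl -u kubelet -n 400'
	    minikube ssh -p "$PROFILE" -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
	    minikube ssh -p "$PROFILE" -- 'sudo journalctl -u crio -n 400'
	    minikube ssh -p "$PROFILE" -- 'sudo crictl ps -a'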
	I0816 18:15:47.698398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:47.711848   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:47.711917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:47.744247   75402 cri.go:89] found id: ""
	I0816 18:15:47.744278   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.744288   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:47.744295   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:47.744374   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:47.783188   75402 cri.go:89] found id: ""
	I0816 18:15:47.783211   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.783219   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:47.783224   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:47.783270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:47.829284   75402 cri.go:89] found id: ""
	I0816 18:15:47.829320   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.829333   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:47.829341   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:47.829413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:47.879482   75402 cri.go:89] found id: ""
	I0816 18:15:47.879514   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.879525   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:47.879532   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:47.879606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:47.913766   75402 cri.go:89] found id: ""
	I0816 18:15:47.913797   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.913808   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:47.913815   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:47.913880   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:47.947262   75402 cri.go:89] found id: ""
	I0816 18:15:47.947340   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.947353   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:47.947362   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:47.947427   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:47.979638   75402 cri.go:89] found id: ""
	I0816 18:15:47.979667   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.979678   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:47.979685   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:47.979741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:48.010246   75402 cri.go:89] found id: ""
	I0816 18:15:48.010277   75402 logs.go:276] 0 containers: []
	W0816 18:15:48.010288   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:48.010296   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:48.010310   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:48.083916   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:48.083953   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:44.940775   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.440356   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:46.207236   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.705791   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:45.948300   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.948501   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.120254   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:48.120285   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:48.169590   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:48.169628   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:48.182821   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:48.182850   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:48.254088   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:50.755114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:50.768167   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:50.768250   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:50.800881   75402 cri.go:89] found id: ""
	I0816 18:15:50.800906   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.800913   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:50.800918   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:50.800969   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:50.833538   75402 cri.go:89] found id: ""
	I0816 18:15:50.833567   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.833578   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:50.833586   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:50.833649   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:50.867306   75402 cri.go:89] found id: ""
	I0816 18:15:50.867336   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.867347   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:50.867353   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:50.867400   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:50.900029   75402 cri.go:89] found id: ""
	I0816 18:15:50.900055   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.900064   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:50.900072   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:50.900135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:50.933604   75402 cri.go:89] found id: ""
	I0816 18:15:50.933630   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.933638   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:50.933643   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:50.933707   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:50.966102   75402 cri.go:89] found id: ""
	I0816 18:15:50.966131   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.966141   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:50.966149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:50.966210   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:50.998007   75402 cri.go:89] found id: ""
	I0816 18:15:50.998036   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.998047   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:50.998054   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:50.998115   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:51.032306   75402 cri.go:89] found id: ""
	I0816 18:15:51.032342   75402 logs.go:276] 0 containers: []
	W0816 18:15:51.032349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:51.032357   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:51.032369   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:51.083186   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:51.083222   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:51.096072   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:51.096153   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:51.162667   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:51.162693   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:51.162709   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:51.241913   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:51.241954   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
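The block above is one full pass of minikube's diagnostic loop: it pgreps for a kube-apiserver process, asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of that container check follows; it is a standalone illustration, not minikube's own cri.go/logs.go code, and it assumes crictl is on PATH and passwordless sudo is available on the node:

	// Sketch only: mirrors the per-component crictl listing seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Mirrors: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}
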
	I0816 18:15:49.440546   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:51.940026   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.706662   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.206075   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.447947   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:52.448340   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:54.448431   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.779323   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:53.793358   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:53.793433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:53.827380   75402 cri.go:89] found id: ""
	I0816 18:15:53.827414   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.827424   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:53.827430   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:53.827489   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:53.867331   75402 cri.go:89] found id: ""
	I0816 18:15:53.867370   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.867380   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:53.867386   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:53.867438   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:53.899445   75402 cri.go:89] found id: ""
	I0816 18:15:53.899477   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.899489   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:53.899498   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:53.899588   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:53.936527   75402 cri.go:89] found id: ""
	I0816 18:15:53.936556   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.936568   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:53.936576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:53.936653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:53.970739   75402 cri.go:89] found id: ""
	I0816 18:15:53.970765   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.970773   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:53.970780   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:53.970842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:54.004119   75402 cri.go:89] found id: ""
	I0816 18:15:54.004150   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.004159   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:54.004164   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:54.004217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:54.038370   75402 cri.go:89] found id: ""
	I0816 18:15:54.038400   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.038411   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:54.038416   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:54.038472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:54.079346   75402 cri.go:89] found id: ""
	I0816 18:15:54.079375   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.079383   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:54.079392   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:54.079403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:54.116551   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:54.116586   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:54.169930   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:54.169970   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:54.182416   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:54.182448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:54.253516   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:54.253539   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:54.253559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:56.833124   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:56.846139   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:56.846211   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:56.880899   75402 cri.go:89] found id: ""
	I0816 18:15:56.880928   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.880939   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:56.880945   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:56.880994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:56.913362   75402 cri.go:89] found id: ""
	I0816 18:15:56.913393   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.913406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:56.913415   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:56.913507   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:56.951876   75402 cri.go:89] found id: ""
	I0816 18:15:56.951904   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.951914   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:56.951919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:56.951988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:56.986335   75402 cri.go:89] found id: ""
	I0816 18:15:56.986358   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.986366   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:56.986372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:56.986423   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:57.022485   75402 cri.go:89] found id: ""
	I0816 18:15:57.022511   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.022522   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:57.022529   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:57.022641   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:57.055436   75402 cri.go:89] found id: ""
	I0816 18:15:57.055463   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.055470   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:57.055476   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:57.055536   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:57.085930   75402 cri.go:89] found id: ""
	I0816 18:15:57.085965   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.085975   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:57.085981   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:57.086032   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:57.120436   75402 cri.go:89] found id: ""
	I0816 18:15:57.120466   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.120477   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:57.120488   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:57.120501   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:57.202161   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:57.202218   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:57.243766   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:57.243805   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:57.295552   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:57.295585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:57.307769   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:57.307802   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:57.390480   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
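Every "describe nodes" attempt above fails the same way: kubectl cannot reach the apiserver on localhost:8443 because, as the crictl listings show, no kube-apiserver container ever started. A direct probe of the /healthz endpoint can distinguish "nothing listening on 8443" (the connection-refused case in this log) from "apiserver up but unhealthy". The snippet below is a hypothetical standalone check, not part of minikube; it assumes the apiserver would serve HTTPS on localhost:8443 and skips certificate verification for the probe only:

	// Sketch only: health probe against the apiserver port seen in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// Self-signed cluster CA; skip verification for this diagnostic probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			// Connection refused here matches the failure mode in the log:
			// nothing is listening on 8443 at all.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded with status:", resp.Status)
	}
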
	I0816 18:15:53.941399   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.439763   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:58.440357   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:55.206970   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:57.207312   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.948085   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.448174   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.891480   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:59.904766   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:59.904836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:59.939209   75402 cri.go:89] found id: ""
	I0816 18:15:59.939241   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.939252   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:59.939260   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:59.939324   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:59.971782   75402 cri.go:89] found id: ""
	I0816 18:15:59.971812   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.971822   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:59.971832   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:59.971894   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:00.018585   75402 cri.go:89] found id: ""
	I0816 18:16:00.018630   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.018643   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:00.018654   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:00.018722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.050484   75402 cri.go:89] found id: ""
	I0816 18:16:00.050520   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.050532   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:00.050540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:00.050603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:00.082900   75402 cri.go:89] found id: ""
	I0816 18:16:00.082930   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.082942   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:00.082951   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:00.083025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:00.115330   75402 cri.go:89] found id: ""
	I0816 18:16:00.115363   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.115372   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:00.115378   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:00.115442   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:00.150898   75402 cri.go:89] found id: ""
	I0816 18:16:00.150935   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.150952   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:00.150960   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:00.151033   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:00.193304   75402 cri.go:89] found id: ""
	I0816 18:16:00.193338   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.193349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:00.193359   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:00.193370   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:00.247340   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:00.247376   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:00.260470   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:00.260500   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:00.336483   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:00.336506   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:00.336521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:00.421251   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:00.421289   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:02.964042   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:02.977284   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:02.977381   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:03.009533   75402 cri.go:89] found id: ""
	I0816 18:16:03.009574   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.009586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:03.009594   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:03.009673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:03.043756   75402 cri.go:89] found id: ""
	I0816 18:16:03.043784   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.043794   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:03.043802   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:03.043867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:03.078817   75402 cri.go:89] found id: ""
	I0816 18:16:03.078840   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.078848   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:03.078853   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:03.078906   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.440728   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:02.440788   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.706129   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.707967   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.948193   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.448504   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:03.112874   75402 cri.go:89] found id: ""
	I0816 18:16:03.112903   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.112912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:03.112918   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:03.112985   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:03.152008   75402 cri.go:89] found id: ""
	I0816 18:16:03.152040   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.152052   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:03.152059   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:03.152125   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:03.187353   75402 cri.go:89] found id: ""
	I0816 18:16:03.187386   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.187396   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:03.187404   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:03.187467   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:03.220860   75402 cri.go:89] found id: ""
	I0816 18:16:03.220895   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.220903   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:03.220909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:03.220958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:03.252202   75402 cri.go:89] found id: ""
	I0816 18:16:03.252240   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.252247   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:03.252256   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:03.252268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:03.286907   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:03.286934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:03.338212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:03.338249   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:03.352548   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:03.352585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:03.427580   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:03.427610   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:03.427626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.011792   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:06.024201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:06.024277   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:06.058328   75402 cri.go:89] found id: ""
	I0816 18:16:06.058356   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.058367   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:06.058373   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:06.058433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:06.091262   75402 cri.go:89] found id: ""
	I0816 18:16:06.091298   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.091311   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:06.091318   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:06.091382   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:06.124114   75402 cri.go:89] found id: ""
	I0816 18:16:06.124146   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.124154   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:06.124159   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:06.124220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:06.155379   75402 cri.go:89] found id: ""
	I0816 18:16:06.155406   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.155416   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:06.155422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:06.155471   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:06.189442   75402 cri.go:89] found id: ""
	I0816 18:16:06.189472   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.189480   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:06.189485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:06.189538   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:06.228881   75402 cri.go:89] found id: ""
	I0816 18:16:06.228910   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.228921   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:06.228929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:06.229003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:06.262272   75402 cri.go:89] found id: ""
	I0816 18:16:06.262299   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.262310   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:06.262317   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:06.262386   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:06.295427   75402 cri.go:89] found id: ""
	I0816 18:16:06.295456   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.295468   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:06.295478   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:06.295492   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:06.347569   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:06.347608   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:06.362786   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:06.362825   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:06.432020   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:06.432044   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:06.432059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.512085   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:06.512120   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:04.940128   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:07.439708   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.206477   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.208125   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.706765   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.947599   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.948183   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:09.051957   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:09.066630   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:09.066690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:09.101484   75402 cri.go:89] found id: ""
	I0816 18:16:09.101515   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.101526   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:09.101536   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:09.101614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:09.140645   75402 cri.go:89] found id: ""
	I0816 18:16:09.140677   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.140689   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:09.140696   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:09.140758   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:09.174666   75402 cri.go:89] found id: ""
	I0816 18:16:09.174698   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.174708   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:09.174717   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:09.174780   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:09.209715   75402 cri.go:89] found id: ""
	I0816 18:16:09.209748   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.209758   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:09.209767   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:09.209845   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:09.243681   75402 cri.go:89] found id: ""
	I0816 18:16:09.243712   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.243720   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:09.243726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:09.243781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:09.278058   75402 cri.go:89] found id: ""
	I0816 18:16:09.278090   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.278102   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:09.278111   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:09.278178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:09.313092   75402 cri.go:89] found id: ""
	I0816 18:16:09.313122   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.313132   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:09.313137   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:09.313201   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:09.345203   75402 cri.go:89] found id: ""
	I0816 18:16:09.345229   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.345236   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:09.345245   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:09.345259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:09.358198   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:09.358225   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:09.422024   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:09.422047   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:09.422059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:09.498684   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:09.498717   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.535349   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:09.535382   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.087472   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:12.100412   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:12.100477   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:12.133982   75402 cri.go:89] found id: ""
	I0816 18:16:12.134018   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.134030   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:12.134038   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:12.134100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:12.166466   75402 cri.go:89] found id: ""
	I0816 18:16:12.166497   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.166507   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:12.166514   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:12.166589   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:12.197752   75402 cri.go:89] found id: ""
	I0816 18:16:12.197779   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.197790   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:12.197797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:12.197856   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:12.239759   75402 cri.go:89] found id: ""
	I0816 18:16:12.239789   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.239801   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:12.239810   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:12.239871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:12.273263   75402 cri.go:89] found id: ""
	I0816 18:16:12.273292   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.273302   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:12.273310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:12.273370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:12.308788   75402 cri.go:89] found id: ""
	I0816 18:16:12.308820   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.308831   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:12.308839   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:12.308897   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:12.345243   75402 cri.go:89] found id: ""
	I0816 18:16:12.345274   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.345281   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:12.345288   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:12.345341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:12.379939   75402 cri.go:89] found id: ""
	I0816 18:16:12.379968   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.379978   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:12.379989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:12.380004   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.436097   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:12.436130   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:12.449328   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:12.449357   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:12.518723   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:12.518749   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:12.518764   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:12.600228   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:12.600268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.441051   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.441097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.206853   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.705328   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.449793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.948517   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.137940   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:15.150617   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:15.150694   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:15.186029   75402 cri.go:89] found id: ""
	I0816 18:16:15.186057   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.186067   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:15.186074   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:15.186134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:15.219812   75402 cri.go:89] found id: ""
	I0816 18:16:15.219840   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.219851   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:15.219864   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:15.219927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:15.253434   75402 cri.go:89] found id: ""
	I0816 18:16:15.253462   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.253472   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:15.253479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:15.253542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:15.286697   75402 cri.go:89] found id: ""
	I0816 18:16:15.286729   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.286745   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:15.286751   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:15.286810   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:15.319363   75402 cri.go:89] found id: ""
	I0816 18:16:15.319405   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.319415   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:15.319422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:15.319506   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:15.353900   75402 cri.go:89] found id: ""
	I0816 18:16:15.353924   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.353931   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:15.353937   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:15.353991   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:15.389086   75402 cri.go:89] found id: ""
	I0816 18:16:15.389114   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.389122   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:15.389127   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:15.389184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:15.424069   75402 cri.go:89] found id: ""
	I0816 18:16:15.424099   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.424110   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:15.424121   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:15.424136   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:15.482703   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:15.482738   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:15.496859   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:15.496886   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:15.562178   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:15.562196   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:15.562212   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:15.643484   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:15.643521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:13.944174   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.439987   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.442569   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.706743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.206088   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.448775   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.948447   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.180963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:18.194705   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:18.194783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:18.231302   75402 cri.go:89] found id: ""
	I0816 18:16:18.231337   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.231348   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:18.231355   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:18.231413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:18.264098   75402 cri.go:89] found id: ""
	I0816 18:16:18.264124   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.264135   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:18.264155   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:18.264228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:18.298133   75402 cri.go:89] found id: ""
	I0816 18:16:18.298165   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.298178   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:18.298186   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:18.298252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:18.331323   75402 cri.go:89] found id: ""
	I0816 18:16:18.331354   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.331362   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:18.331367   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:18.331416   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:18.365677   75402 cri.go:89] found id: ""
	I0816 18:16:18.365709   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.365718   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:18.365724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:18.365774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:18.399801   75402 cri.go:89] found id: ""
	I0816 18:16:18.399835   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.399844   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:18.399850   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:18.399908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:18.438148   75402 cri.go:89] found id: ""
	I0816 18:16:18.438179   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.438189   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:18.438197   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:18.438257   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:18.472185   75402 cri.go:89] found id: ""
	I0816 18:16:18.472215   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.472223   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:18.472232   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:18.472243   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:18.523369   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:18.523400   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:18.536152   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:18.536179   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:18.611539   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:18.611560   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:18.611571   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:18.688043   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:18.688079   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:21.229163   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:21.242641   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:21.242717   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:21.275188   75402 cri.go:89] found id: ""
	I0816 18:16:21.275213   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.275220   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:21.275226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:21.275275   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:21.308377   75402 cri.go:89] found id: ""
	I0816 18:16:21.308406   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.308417   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:21.308424   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:21.308475   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:21.341067   75402 cri.go:89] found id: ""
	I0816 18:16:21.341098   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.341106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:21.341112   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:21.341170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:21.372707   75402 cri.go:89] found id: ""
	I0816 18:16:21.372743   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.372756   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:21.372763   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:21.372847   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:21.410210   75402 cri.go:89] found id: ""
	I0816 18:16:21.410241   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.410252   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:21.410259   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:21.410323   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:21.444840   75402 cri.go:89] found id: ""
	I0816 18:16:21.444863   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.444872   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:21.444879   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:21.444942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:21.478278   75402 cri.go:89] found id: ""
	I0816 18:16:21.478319   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.478327   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:21.478333   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:21.478395   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:21.512026   75402 cri.go:89] found id: ""
	I0816 18:16:21.512063   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.512073   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:21.512090   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:21.512111   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:21.564800   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:21.564834   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:21.577343   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:21.577368   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:21.663216   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:21.663238   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:21.663251   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:21.741960   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:21.741994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:20.939740   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.942844   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:20.706032   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.707112   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:21.449404   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:23.454804   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:24.282136   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:24.296452   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:24.296513   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:24.337173   75402 cri.go:89] found id: ""
	I0816 18:16:24.337200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.337210   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:24.337218   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:24.337282   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:24.374163   75402 cri.go:89] found id: ""
	I0816 18:16:24.374200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.374213   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:24.374222   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:24.374287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:24.407823   75402 cri.go:89] found id: ""
	I0816 18:16:24.407854   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.407866   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:24.407881   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:24.407953   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:24.444006   75402 cri.go:89] found id: ""
	I0816 18:16:24.444032   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.444042   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:24.444049   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:24.444113   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:24.479082   75402 cri.go:89] found id: ""
	I0816 18:16:24.479110   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.479119   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:24.479125   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:24.479174   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:24.524738   75402 cri.go:89] found id: ""
	I0816 18:16:24.524764   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.524775   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:24.524782   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:24.524842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:24.560298   75402 cri.go:89] found id: ""
	I0816 18:16:24.560326   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.560335   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:24.560343   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:24.560406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:24.597182   75402 cri.go:89] found id: ""
	I0816 18:16:24.597214   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.597227   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:24.597239   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:24.597254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:24.653063   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:24.653106   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:24.665940   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:24.665972   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:24.736599   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:24.736639   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:24.736657   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:24.821883   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:24.821939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.359558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:27.382980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:27.383053   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:27.416766   75402 cri.go:89] found id: ""
	I0816 18:16:27.416793   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.416802   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:27.416811   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:27.416873   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:27.452966   75402 cri.go:89] found id: ""
	I0816 18:16:27.452988   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.452995   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:27.453001   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:27.453050   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:27.485850   75402 cri.go:89] found id: ""
	I0816 18:16:27.485885   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.485896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:27.485903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:27.485960   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:27.517667   75402 cri.go:89] found id: ""
	I0816 18:16:27.517694   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.517704   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:27.517711   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:27.517774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:27.553547   75402 cri.go:89] found id: ""
	I0816 18:16:27.553574   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.553582   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:27.553593   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:27.553653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:27.586857   75402 cri.go:89] found id: ""
	I0816 18:16:27.586884   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.586893   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:27.586898   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:27.586957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:27.621739   75402 cri.go:89] found id: ""
	I0816 18:16:27.621766   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.621776   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:27.621784   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:27.621844   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:27.657772   75402 cri.go:89] found id: ""
	I0816 18:16:27.657797   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.657805   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:27.657819   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:27.657831   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:27.729769   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:27.729796   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:27.729810   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:27.813351   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:27.813403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.852985   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:27.853010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:27.908434   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:27.908476   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:25.439828   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.440749   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.207590   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.706496   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.948579   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:28.448590   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.422781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:30.435987   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:30.436070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:30.470878   75402 cri.go:89] found id: ""
	I0816 18:16:30.470907   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.470918   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:30.470926   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:30.470983   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:30.504940   75402 cri.go:89] found id: ""
	I0816 18:16:30.504969   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.504980   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:30.504988   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:30.505058   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:30.538680   75402 cri.go:89] found id: ""
	I0816 18:16:30.538708   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.538716   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:30.538722   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:30.538788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:30.574757   75402 cri.go:89] found id: ""
	I0816 18:16:30.574782   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.574791   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:30.574797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:30.574853   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:30.612500   75402 cri.go:89] found id: ""
	I0816 18:16:30.612529   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.612539   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:30.612547   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:30.612613   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:30.644572   75402 cri.go:89] found id: ""
	I0816 18:16:30.644595   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.644603   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:30.644609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:30.644678   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:30.678199   75402 cri.go:89] found id: ""
	I0816 18:16:30.678232   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.678243   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:30.678252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:30.678331   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:30.709435   75402 cri.go:89] found id: ""
	I0816 18:16:30.709470   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.709482   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:30.709494   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:30.709511   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.723430   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:30.723464   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:30.800340   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:30.800374   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:30.800390   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:30.883945   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:30.883986   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:30.922107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:30.922139   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:29.940430   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.440198   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:29.706649   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.205271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.949515   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.448456   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.480016   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:33.494178   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:33.494241   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:33.529497   75402 cri.go:89] found id: ""
	I0816 18:16:33.529527   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.529546   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:33.529554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:33.529614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:33.566670   75402 cri.go:89] found id: ""
	I0816 18:16:33.566700   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.566711   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:33.566718   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:33.566781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:33.603898   75402 cri.go:89] found id: ""
	I0816 18:16:33.603926   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.603937   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:33.603944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:33.604003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:33.636077   75402 cri.go:89] found id: ""
	I0816 18:16:33.636111   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.636125   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:33.636134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:33.636200   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:33.668974   75402 cri.go:89] found id: ""
	I0816 18:16:33.669002   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.669011   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:33.669017   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:33.669070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:33.700981   75402 cri.go:89] found id: ""
	I0816 18:16:33.701010   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.701019   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:33.701026   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:33.701088   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:33.735430   75402 cri.go:89] found id: ""
	I0816 18:16:33.735463   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.735474   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:33.735481   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:33.735539   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:33.779797   75402 cri.go:89] found id: ""
	I0816 18:16:33.779829   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.779840   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:33.779851   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:33.779865   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:33.824873   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:33.824908   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.874177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:33.874217   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:33.888535   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:33.888561   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:33.957590   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:33.957608   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:33.957627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:36.533660   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:36.546542   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:36.546606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:36.584056   75402 cri.go:89] found id: ""
	I0816 18:16:36.584085   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.584094   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:36.584099   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:36.584149   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:36.622143   75402 cri.go:89] found id: ""
	I0816 18:16:36.622172   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.622184   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:36.622193   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:36.622262   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:36.655479   75402 cri.go:89] found id: ""
	I0816 18:16:36.655509   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.655520   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:36.655528   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:36.655603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:36.688044   75402 cri.go:89] found id: ""
	I0816 18:16:36.688076   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.688088   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:36.688096   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:36.688161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:36.725831   75402 cri.go:89] found id: ""
	I0816 18:16:36.725861   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.725868   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:36.725874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:36.725925   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:36.758398   75402 cri.go:89] found id: ""
	I0816 18:16:36.758433   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.758444   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:36.758453   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:36.758517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:36.791097   75402 cri.go:89] found id: ""
	I0816 18:16:36.791126   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.791136   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:36.791144   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:36.791207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:36.829337   75402 cri.go:89] found id: ""
	I0816 18:16:36.829369   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.829380   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:36.829391   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:36.829405   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:36.881898   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:36.881932   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:36.895584   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:36.895618   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:36.967175   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:36.967197   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:36.967213   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:37.046993   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:37.047025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:34.440475   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.946369   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:34.206677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.207893   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:38.706193   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:35.449611   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:37.947527   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.588683   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:39.607205   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:39.607287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:39.640517   75402 cri.go:89] found id: ""
	I0816 18:16:39.640541   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.640549   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:39.640554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:39.640604   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:39.673777   75402 cri.go:89] found id: ""
	I0816 18:16:39.673805   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.673813   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:39.673818   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:39.673899   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:39.709574   75402 cri.go:89] found id: ""
	I0816 18:16:39.709598   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.709606   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:39.709611   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:39.709666   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:39.743946   75402 cri.go:89] found id: ""
	I0816 18:16:39.743971   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.743979   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:39.743985   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:39.744041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:39.776140   75402 cri.go:89] found id: ""
	I0816 18:16:39.776171   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.776181   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:39.776187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:39.776254   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:39.808697   75402 cri.go:89] found id: ""
	I0816 18:16:39.808719   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.808728   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:39.808734   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:39.808793   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:39.840163   75402 cri.go:89] found id: ""
	I0816 18:16:39.840190   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.840200   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:39.840206   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:39.840270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:39.874396   75402 cri.go:89] found id: ""
	I0816 18:16:39.874419   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.874426   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:39.874434   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:39.874448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:39.927922   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:39.927963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:39.942048   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:39.942076   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:40.012143   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:40.012166   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:40.012181   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:40.088798   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:40.088844   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:42.625875   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:42.640386   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:42.640448   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:42.675201   75402 cri.go:89] found id: ""
	I0816 18:16:42.675224   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.675231   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:42.675236   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:42.675293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:42.705156   75402 cri.go:89] found id: ""
	I0816 18:16:42.705182   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.705192   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:42.705199   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:42.705258   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:42.738921   75402 cri.go:89] found id: ""
	I0816 18:16:42.738948   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.738956   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:42.738962   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:42.739013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:42.771130   75402 cri.go:89] found id: ""
	I0816 18:16:42.771160   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.771168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:42.771175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:42.771231   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:42.805774   75402 cri.go:89] found id: ""
	I0816 18:16:42.805803   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.805811   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:42.805817   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:42.805879   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:42.840248   75402 cri.go:89] found id: ""
	I0816 18:16:42.840277   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.840293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:42.840302   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:42.840360   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:42.873260   75402 cri.go:89] found id: ""
	I0816 18:16:42.873287   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.873297   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:42.873322   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:42.873383   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:42.906205   75402 cri.go:89] found id: ""
	I0816 18:16:42.906230   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.906238   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:42.906247   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:42.906257   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:42.959235   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:42.959272   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:42.972063   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:42.972090   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:43.039530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:43.039558   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:43.039569   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:39.440219   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:41.441052   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:40.707059   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.210643   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:42.448534   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.115486   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:43.115519   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:45.651040   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:45.663718   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:45.663812   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:45.696548   75402 cri.go:89] found id: ""
	I0816 18:16:45.696578   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.696586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:45.696591   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:45.696663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:45.731032   75402 cri.go:89] found id: ""
	I0816 18:16:45.731059   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.731068   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:45.731073   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:45.731126   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:45.764801   75402 cri.go:89] found id: ""
	I0816 18:16:45.764829   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.764840   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:45.764846   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:45.764908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:45.800768   75402 cri.go:89] found id: ""
	I0816 18:16:45.800795   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.800803   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:45.800809   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:45.800858   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:45.841460   75402 cri.go:89] found id: ""
	I0816 18:16:45.841486   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.841493   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:45.841505   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:45.841566   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:45.875230   75402 cri.go:89] found id: ""
	I0816 18:16:45.875254   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.875261   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:45.875266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:45.875319   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:45.907711   75402 cri.go:89] found id: ""
	I0816 18:16:45.907739   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.907747   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:45.907753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:45.907804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:45.943147   75402 cri.go:89] found id: ""
	I0816 18:16:45.943171   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.943182   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:45.943192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:45.943206   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:45.998459   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:45.998491   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:46.013237   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:46.013267   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:46.079248   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:46.079273   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:46.079288   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:46.158842   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:46.158874   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:43.939212   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.939893   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:47.940331   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.706588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.206342   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:44.948046   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:46.948752   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:49.448263   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.696728   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:48.710946   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:48.711041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:48.746696   75402 cri.go:89] found id: ""
	I0816 18:16:48.746727   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.746735   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:48.746741   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:48.746803   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:48.781496   75402 cri.go:89] found id: ""
	I0816 18:16:48.781522   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.781532   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:48.781539   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:48.781603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:48.815628   75402 cri.go:89] found id: ""
	I0816 18:16:48.815654   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.815665   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:48.815673   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:48.815736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:48.848990   75402 cri.go:89] found id: ""
	I0816 18:16:48.849018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.849030   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:48.849040   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:48.849098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:48.886924   75402 cri.go:89] found id: ""
	I0816 18:16:48.886949   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.886960   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:48.886968   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:48.887022   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:48.923989   75402 cri.go:89] found id: ""
	I0816 18:16:48.924018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.924030   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:48.924038   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:48.924102   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:48.959513   75402 cri.go:89] found id: ""
	I0816 18:16:48.959546   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.959556   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:48.959562   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:48.959614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:48.995615   75402 cri.go:89] found id: ""
	I0816 18:16:48.995651   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.995662   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:48.995673   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:48.995688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:49.008440   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:49.008468   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:49.076761   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:49.076780   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:49.076797   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:49.152855   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:49.152893   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:49.190857   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:49.190887   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:51.745344   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:51.759552   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:51.759628   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:51.795494   75402 cri.go:89] found id: ""
	I0816 18:16:51.795520   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.795531   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:51.795539   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:51.795600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:51.833162   75402 cri.go:89] found id: ""
	I0816 18:16:51.833188   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.833198   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:51.833205   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:51.833265   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:51.866940   75402 cri.go:89] found id: ""
	I0816 18:16:51.866968   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.866979   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:51.866986   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:51.867051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:51.899824   75402 cri.go:89] found id: ""
	I0816 18:16:51.899857   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.899867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:51.899874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:51.899937   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:51.932273   75402 cri.go:89] found id: ""
	I0816 18:16:51.932297   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.932312   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:51.932320   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:51.932390   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:51.966885   75402 cri.go:89] found id: ""
	I0816 18:16:51.966911   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.966922   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:51.966930   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:51.966996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:52.002988   75402 cri.go:89] found id: ""
	I0816 18:16:52.003020   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.003029   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:52.003035   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:52.003098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:52.038858   75402 cri.go:89] found id: ""
	I0816 18:16:52.038894   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.038909   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:52.038919   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:52.038933   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:52.076404   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:52.076431   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:52.127735   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:52.127767   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:52.140657   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:52.140680   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:52.202961   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:52.202989   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:52.203008   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:50.440577   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.441865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:50.705618   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.706795   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:51.448948   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:53.947907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:54.787095   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:54.801258   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:54.801332   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:54.837987   75402 cri.go:89] found id: ""
	I0816 18:16:54.838018   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.838028   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:54.838034   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:54.838118   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:54.872439   75402 cri.go:89] found id: ""
	I0816 18:16:54.872466   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.872477   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:54.872490   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:54.872554   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:54.904676   75402 cri.go:89] found id: ""
	I0816 18:16:54.904706   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.904717   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:54.904724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:54.904783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:54.938101   75402 cri.go:89] found id: ""
	I0816 18:16:54.938134   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.938145   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:54.938154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:54.938218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:54.977409   75402 cri.go:89] found id: ""
	I0816 18:16:54.977442   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.977453   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:54.977460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:54.977521   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:55.013248   75402 cri.go:89] found id: ""
	I0816 18:16:55.013275   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.013286   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:55.013294   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:55.013363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:55.044555   75402 cri.go:89] found id: ""
	I0816 18:16:55.044588   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.044597   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:55.044603   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:55.044690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:55.075970   75402 cri.go:89] found id: ""
	I0816 18:16:55.075997   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.076006   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:55.076014   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:55.076025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:55.149982   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:55.150017   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:55.190160   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:55.190194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:55.242629   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:55.242660   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:55.255229   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:55.255254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:55.324775   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:57.824996   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:57.838666   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:57.838740   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:57.872828   75402 cri.go:89] found id: ""
	I0816 18:16:57.872861   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.872869   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:57.872875   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:57.872927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:57.907324   75402 cri.go:89] found id: ""
	I0816 18:16:57.907354   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.907366   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:57.907373   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:57.907433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:57.941657   75402 cri.go:89] found id: ""
	I0816 18:16:57.941682   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.941689   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:57.941695   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:57.941746   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:57.981424   75402 cri.go:89] found id: ""
	I0816 18:16:57.981466   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.981480   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:57.981489   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:57.981562   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:58.015534   75402 cri.go:89] found id: ""
	I0816 18:16:58.015587   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.015598   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:58.015606   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:58.015669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:58.047875   75402 cri.go:89] found id: ""
	I0816 18:16:58.047908   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.047917   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:58.047923   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:58.047976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:58.079294   75402 cri.go:89] found id: ""
	I0816 18:16:58.079324   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.079334   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:58.079342   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:58.079406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:54.940977   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.439254   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.208298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.706380   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.948080   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.949589   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:58.112357   75402 cri.go:89] found id: ""
	I0816 18:16:58.112389   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.112401   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:58.112413   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:58.112428   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:58.159903   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:58.159934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:58.172763   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:58.172789   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:58.245827   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:58.245856   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:58.245872   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:58.325008   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:58.325049   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:00.864354   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:00.877517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:00.877593   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:00.915396   75402 cri.go:89] found id: ""
	I0816 18:17:00.915428   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.915438   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:00.915446   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:00.915611   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:00.953950   75402 cri.go:89] found id: ""
	I0816 18:17:00.953977   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.953987   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:00.953993   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:00.954051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:00.987673   75402 cri.go:89] found id: ""
	I0816 18:17:00.987703   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.987713   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:00.987721   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:00.987784   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:01.021230   75402 cri.go:89] found id: ""
	I0816 18:17:01.021277   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.021308   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:01.021315   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:01.021388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:01.057087   75402 cri.go:89] found id: ""
	I0816 18:17:01.057117   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.057127   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:01.057135   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:01.057207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:01.094142   75402 cri.go:89] found id: ""
	I0816 18:17:01.094168   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.094176   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:01.094183   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:01.094233   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:01.132799   75402 cri.go:89] found id: ""
	I0816 18:17:01.132824   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.132831   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:01.132837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:01.132888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:01.173367   75402 cri.go:89] found id: ""
	I0816 18:17:01.173402   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.173414   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:01.173425   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:01.173443   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:01.186856   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:01.186896   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:01.259913   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:01.259941   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:01.259955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:01.340914   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:01.340947   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:01.381023   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:01.381058   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:59.440314   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.440377   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:59.706750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.707186   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:00.448182   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:02.448773   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:03.933420   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:03.946940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:03.947008   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:03.984529   75402 cri.go:89] found id: ""
	I0816 18:17:03.984560   75402 logs.go:276] 0 containers: []
	W0816 18:17:03.984571   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:03.984581   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:03.984668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:04.017900   75402 cri.go:89] found id: ""
	I0816 18:17:04.017929   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.017940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:04.017948   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:04.018009   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:04.050837   75402 cri.go:89] found id: ""
	I0816 18:17:04.050871   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.050888   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:04.050896   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:04.050959   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:04.085448   75402 cri.go:89] found id: ""
	I0816 18:17:04.085477   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.085487   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:04.085495   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:04.085564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:04.118177   75402 cri.go:89] found id: ""
	I0816 18:17:04.118203   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.118213   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:04.118220   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:04.118284   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:04.150289   75402 cri.go:89] found id: ""
	I0816 18:17:04.150317   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.150330   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:04.150338   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:04.150404   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:04.184258   75402 cri.go:89] found id: ""
	I0816 18:17:04.184282   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.184290   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:04.184295   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:04.184347   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:04.217142   75402 cri.go:89] found id: ""
	I0816 18:17:04.217174   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.217184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:04.217192   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:04.217204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:04.253000   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:04.253034   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:04.304978   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:04.305018   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:04.320210   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:04.320241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:04.396146   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:04.396169   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:04.396184   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:06.980747   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:06.992944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:06.993006   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:07.026303   75402 cri.go:89] found id: ""
	I0816 18:17:07.026356   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.026368   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:07.026376   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:07.026443   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:07.059226   75402 cri.go:89] found id: ""
	I0816 18:17:07.059257   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.059268   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:07.059277   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:07.059339   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:07.092142   75402 cri.go:89] found id: ""
	I0816 18:17:07.092171   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.092182   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:07.092188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:07.092248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:07.125284   75402 cri.go:89] found id: ""
	I0816 18:17:07.125330   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.125347   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:07.125355   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:07.125420   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:07.163890   75402 cri.go:89] found id: ""
	I0816 18:17:07.163919   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.163930   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:07.163938   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:07.164002   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:07.197988   75402 cri.go:89] found id: ""
	I0816 18:17:07.198014   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.198025   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:07.198033   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:07.198116   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:07.232709   75402 cri.go:89] found id: ""
	I0816 18:17:07.232738   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.232749   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:07.232756   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:07.232817   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:07.264514   75402 cri.go:89] found id: ""
	I0816 18:17:07.264548   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.264558   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:07.264569   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:07.264583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:07.316138   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:07.316173   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:07.329659   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:07.329688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:07.397345   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:07.397380   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:07.397397   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:07.481245   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:07.481280   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:03.940100   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:05.940355   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.940821   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.207253   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:06.705745   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:08.706828   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.949027   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.447957   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.024405   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:10.036860   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:10.036927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.069402   75402 cri.go:89] found id: ""
	I0816 18:17:10.069436   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.069448   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:10.069458   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:10.069511   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:10.101480   75402 cri.go:89] found id: ""
	I0816 18:17:10.101508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.101518   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:10.101529   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:10.101601   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:10.131673   75402 cri.go:89] found id: ""
	I0816 18:17:10.131708   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.131719   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:10.131726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:10.131821   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:10.166476   75402 cri.go:89] found id: ""
	I0816 18:17:10.166508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.166518   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:10.166525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:10.166590   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:10.199296   75402 cri.go:89] found id: ""
	I0816 18:17:10.199321   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.199332   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:10.199340   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:10.199406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:10.232640   75402 cri.go:89] found id: ""
	I0816 18:17:10.232672   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.232683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:10.232691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:10.232775   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:10.263958   75402 cri.go:89] found id: ""
	I0816 18:17:10.263988   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.263998   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:10.264003   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:10.264052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:10.295904   75402 cri.go:89] found id: ""
	I0816 18:17:10.295929   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.295937   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:10.295946   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:10.295957   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:10.344874   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:10.344909   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:10.358523   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:10.358552   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:10.433311   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:10.433334   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:10.433351   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:10.514580   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:10.514620   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.053815   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:13.068517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:13.068597   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.440472   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:12.939209   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.707438   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.207630   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:09.947889   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:11.949408   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:14.447906   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.104251   75402 cri.go:89] found id: ""
	I0816 18:17:13.104279   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.104313   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:13.104321   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:13.104375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:13.137415   75402 cri.go:89] found id: ""
	I0816 18:17:13.137442   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.137453   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:13.137461   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:13.137510   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:13.174165   75402 cri.go:89] found id: ""
	I0816 18:17:13.174191   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.174203   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:13.174210   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:13.174271   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:13.206789   75402 cri.go:89] found id: ""
	I0816 18:17:13.206814   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.206823   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:13.206831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:13.206892   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:13.238950   75402 cri.go:89] found id: ""
	I0816 18:17:13.238975   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.238984   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:13.238990   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:13.239037   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:13.271485   75402 cri.go:89] found id: ""
	I0816 18:17:13.271518   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.271535   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:13.271544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:13.271612   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:13.307576   75402 cri.go:89] found id: ""
	I0816 18:17:13.307610   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.307622   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:13.307632   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:13.307698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:13.339746   75402 cri.go:89] found id: ""
	I0816 18:17:13.339792   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.339802   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:13.339813   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:13.339827   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:13.352847   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:13.352875   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:13.440397   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:13.440418   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:13.440432   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:13.514879   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:13.514916   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.553848   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:13.553882   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.103318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:16.115837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:16.115922   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:16.147079   75402 cri.go:89] found id: ""
	I0816 18:17:16.147108   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.147119   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:16.147127   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:16.147189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:16.184207   75402 cri.go:89] found id: ""
	I0816 18:17:16.184233   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.184241   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:16.184247   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:16.184295   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:16.219036   75402 cri.go:89] found id: ""
	I0816 18:17:16.219065   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.219072   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:16.219078   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:16.219163   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:16.251269   75402 cri.go:89] found id: ""
	I0816 18:17:16.251307   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.251320   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:16.251329   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:16.251394   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:16.286549   75402 cri.go:89] found id: ""
	I0816 18:17:16.286576   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.286585   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:16.286591   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:16.286647   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:16.322017   75402 cri.go:89] found id: ""
	I0816 18:17:16.322045   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.322055   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:16.322063   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:16.322128   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:16.353606   75402 cri.go:89] found id: ""
	I0816 18:17:16.353636   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.353646   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:16.353653   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:16.353719   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:16.386973   75402 cri.go:89] found id: ""
	I0816 18:17:16.387005   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.387016   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:16.387027   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:16.387039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.437031   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:16.437066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:16.451258   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:16.451292   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:16.519130   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:16.519155   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:16.519170   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:16.598591   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:16.598626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:14.939993   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.440655   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:15.705969   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.706271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:16.449266   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:18.948220   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:19.147916   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:19.160525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:19.160600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:19.193494   75402 cri.go:89] found id: ""
	I0816 18:17:19.193520   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.193527   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:19.193533   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:19.193599   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:19.230936   75402 cri.go:89] found id: ""
	I0816 18:17:19.230963   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.230971   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:19.230976   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:19.231029   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:19.263713   75402 cri.go:89] found id: ""
	I0816 18:17:19.263735   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.263742   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:19.263748   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:19.263794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:19.294609   75402 cri.go:89] found id: ""
	I0816 18:17:19.294635   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.294642   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:19.294647   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:19.294698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:19.329278   75402 cri.go:89] found id: ""
	I0816 18:17:19.329303   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.329313   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:19.329319   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:19.329368   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:19.362007   75402 cri.go:89] found id: ""
	I0816 18:17:19.362043   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.362052   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:19.362067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:19.362120   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:19.395190   75402 cri.go:89] found id: ""
	I0816 18:17:19.395217   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.395248   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:19.395255   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:19.395302   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:19.426962   75402 cri.go:89] found id: ""
	I0816 18:17:19.426991   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.427002   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:19.427012   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:19.427027   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.441319   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:19.441346   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:19.511390   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:19.511409   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:19.511425   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:19.590897   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:19.590935   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:19.628753   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:19.628781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.182534   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:22.194844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:22.194917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:22.228225   75402 cri.go:89] found id: ""
	I0816 18:17:22.228247   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.228269   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:22.228276   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:22.228325   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:22.258614   75402 cri.go:89] found id: ""
	I0816 18:17:22.258646   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.258654   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:22.258660   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:22.258708   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:22.289103   75402 cri.go:89] found id: ""
	I0816 18:17:22.289136   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.289147   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:22.289154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:22.289215   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:22.321828   75402 cri.go:89] found id: ""
	I0816 18:17:22.321857   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.321869   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:22.321877   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:22.321942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:22.353557   75402 cri.go:89] found id: ""
	I0816 18:17:22.353588   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.353597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:22.353602   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:22.353660   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:22.385078   75402 cri.go:89] found id: ""
	I0816 18:17:22.385103   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.385110   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:22.385116   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:22.385189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:22.415864   75402 cri.go:89] found id: ""
	I0816 18:17:22.415900   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.415913   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:22.415922   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:22.415990   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:22.449895   75402 cri.go:89] found id: ""
	I0816 18:17:22.449922   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.449942   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:22.449957   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:22.449974   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:22.523055   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:22.523073   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:22.523084   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:22.599680   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:22.599719   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:22.638021   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:22.638057   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.688970   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:22.689010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.941154   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.440580   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:20.207713   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.706805   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:21.448399   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:23.448444   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.202748   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:25.217316   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:25.217388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:25.249528   75402 cri.go:89] found id: ""
	I0816 18:17:25.249558   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.249566   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:25.249578   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:25.249625   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:25.282667   75402 cri.go:89] found id: ""
	I0816 18:17:25.282696   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.282706   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:25.282712   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:25.282764   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:25.314061   75402 cri.go:89] found id: ""
	I0816 18:17:25.314091   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.314101   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:25.314108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:25.314161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:25.351260   75402 cri.go:89] found id: ""
	I0816 18:17:25.351287   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.351296   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:25.351301   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:25.351352   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:25.388303   75402 cri.go:89] found id: ""
	I0816 18:17:25.388334   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.388345   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:25.388352   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:25.388412   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:25.422133   75402 cri.go:89] found id: ""
	I0816 18:17:25.422161   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.422169   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:25.422175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:25.422232   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:25.456749   75402 cri.go:89] found id: ""
	I0816 18:17:25.456775   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.456783   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:25.456789   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:25.456836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:25.494783   75402 cri.go:89] found id: ""
	I0816 18:17:25.494809   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.494817   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:25.494825   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:25.494836   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:25.561253   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:25.561290   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.580349   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:25.580383   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:25.656333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:25.656361   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:25.656378   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:25.733479   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:25.733515   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:24.444069   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.939743   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:24.707849   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.709711   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.448555   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:27.449070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:28.272217   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:28.285750   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:28.285822   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:28.318230   75402 cri.go:89] found id: ""
	I0816 18:17:28.318260   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.318268   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:28.318275   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:28.318344   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:28.351766   75402 cri.go:89] found id: ""
	I0816 18:17:28.351798   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.351808   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:28.351814   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:28.351872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:28.385543   75402 cri.go:89] found id: ""
	I0816 18:17:28.385572   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.385581   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:28.385588   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:28.385653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:28.418808   75402 cri.go:89] found id: ""
	I0816 18:17:28.418837   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.418846   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:28.418852   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:28.418900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:28.453883   75402 cri.go:89] found id: ""
	I0816 18:17:28.453911   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.453922   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:28.453929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:28.453996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:28.486261   75402 cri.go:89] found id: ""
	I0816 18:17:28.486291   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.486304   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:28.486310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:28.486366   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:28.520617   75402 cri.go:89] found id: ""
	I0816 18:17:28.520658   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.520670   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:28.520678   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:28.520731   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:28.552996   75402 cri.go:89] found id: ""
	I0816 18:17:28.553026   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.553036   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:28.553046   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:28.553061   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:28.604149   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:28.604192   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:28.617393   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:28.617421   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:28.683258   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:28.683279   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:28.683294   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.766933   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:28.766977   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.305897   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:31.326070   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:31.326143   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:31.375314   75402 cri.go:89] found id: ""
	I0816 18:17:31.375350   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.375361   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:31.375369   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:31.375429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:31.407372   75402 cri.go:89] found id: ""
	I0816 18:17:31.407398   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.407406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:31.407411   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:31.407459   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:31.445679   75402 cri.go:89] found id: ""
	I0816 18:17:31.445706   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.445714   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:31.445720   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:31.445781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:31.480040   75402 cri.go:89] found id: ""
	I0816 18:17:31.480072   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.480080   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:31.480085   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:31.480145   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:31.511143   75402 cri.go:89] found id: ""
	I0816 18:17:31.511171   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.511182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:31.511188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:31.511252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:31.544254   75402 cri.go:89] found id: ""
	I0816 18:17:31.544282   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.544293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:31.544300   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:31.544363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:31.579007   75402 cri.go:89] found id: ""
	I0816 18:17:31.579033   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.579041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:31.579046   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:31.579108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:31.619966   75402 cri.go:89] found id: ""
	I0816 18:17:31.619995   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.620005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:31.620018   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:31.620035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.657784   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:31.657815   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:31.706824   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:31.706853   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:31.719696   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:31.719721   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:31.786096   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:31.786124   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:31.786142   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.940711   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.440514   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.206929   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.706188   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:33.706244   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.948053   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:32.448453   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.363862   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:34.377365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:34.377430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:34.414191   75402 cri.go:89] found id: ""
	I0816 18:17:34.414216   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.414223   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:34.414229   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:34.414285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:34.446811   75402 cri.go:89] found id: ""
	I0816 18:17:34.446836   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.446843   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:34.446848   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:34.446905   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:34.477582   75402 cri.go:89] found id: ""
	I0816 18:17:34.477615   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.477627   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:34.477634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:34.477695   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:34.507868   75402 cri.go:89] found id: ""
	I0816 18:17:34.507901   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.507912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:34.507921   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:34.507984   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:34.538719   75402 cri.go:89] found id: ""
	I0816 18:17:34.538754   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.538765   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:34.538772   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:34.538826   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:34.571445   75402 cri.go:89] found id: ""
	I0816 18:17:34.571468   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.571477   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:34.571484   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:34.571557   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:34.601587   75402 cri.go:89] found id: ""
	I0816 18:17:34.601611   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.601618   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:34.601624   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:34.601669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:34.634850   75402 cri.go:89] found id: ""
	I0816 18:17:34.634878   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.634892   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:34.634906   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:34.634920   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:34.682828   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:34.682859   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:34.695796   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:34.695820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:34.762100   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:34.762121   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:34.762133   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.845329   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:34.845359   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:37.386266   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:37.398940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:37.399005   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:37.433072   75402 cri.go:89] found id: ""
	I0816 18:17:37.433099   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.433112   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:37.433118   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:37.433169   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:37.466968   75402 cri.go:89] found id: ""
	I0816 18:17:37.467001   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.467012   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:37.467021   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:37.467086   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:37.509268   75402 cri.go:89] found id: ""
	I0816 18:17:37.509291   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.509300   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:37.509306   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:37.509365   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:37.541295   75402 cri.go:89] found id: ""
	I0816 18:17:37.541338   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.541350   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:37.541357   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:37.541421   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:37.575423   75402 cri.go:89] found id: ""
	I0816 18:17:37.575453   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.575464   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:37.575472   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:37.575540   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:37.614787   75402 cri.go:89] found id: ""
	I0816 18:17:37.614817   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.614828   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:37.614835   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:37.614896   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:37.646396   75402 cri.go:89] found id: ""
	I0816 18:17:37.646430   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.646441   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:37.646449   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:37.646517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:37.679383   75402 cri.go:89] found id: ""
	I0816 18:17:37.679414   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.679423   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:37.679431   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:37.679442   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:37.729641   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:37.729673   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:37.742420   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:37.742448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:37.812572   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:37.812600   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:37.812615   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:37.887100   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:37.887137   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:33.940380   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.941055   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.440700   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.706903   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.207115   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.947638   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:37.448511   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:39.448944   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.424202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:40.438231   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:40.438337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:40.474614   75402 cri.go:89] found id: ""
	I0816 18:17:40.474639   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.474648   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:40.474653   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:40.474701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:40.510123   75402 cri.go:89] found id: ""
	I0816 18:17:40.510154   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.510162   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:40.510167   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:40.510217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:40.548971   75402 cri.go:89] found id: ""
	I0816 18:17:40.549000   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.549008   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:40.549013   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:40.549069   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:40.595126   75402 cri.go:89] found id: ""
	I0816 18:17:40.595158   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.595167   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:40.595174   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:40.595220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:40.629769   75402 cri.go:89] found id: ""
	I0816 18:17:40.629793   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.629801   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:40.629807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:40.629871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:40.661889   75402 cri.go:89] found id: ""
	I0816 18:17:40.661922   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.661932   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:40.661939   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:40.662001   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:40.697764   75402 cri.go:89] found id: ""
	I0816 18:17:40.697790   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.697801   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:40.697808   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:40.697867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:40.734825   75402 cri.go:89] found id: ""
	I0816 18:17:40.734852   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.734862   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:40.734872   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:40.734939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:40.787975   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:40.788015   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:40.800817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:40.800843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:40.874182   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:40.874205   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:40.874219   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:40.960032   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:40.960066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:40.940284   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.943218   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.207943   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.707356   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:41.947437   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.947887   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.499770   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:43.513726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:43.513806   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:43.548368   75402 cri.go:89] found id: ""
	I0816 18:17:43.548396   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.548406   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:43.548413   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:43.548474   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:43.581177   75402 cri.go:89] found id: ""
	I0816 18:17:43.581205   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.581216   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:43.581223   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:43.581291   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:43.614315   75402 cri.go:89] found id: ""
	I0816 18:17:43.614354   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.614367   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:43.614374   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:43.614437   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:43.648608   75402 cri.go:89] found id: ""
	I0816 18:17:43.648645   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.648658   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:43.648669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:43.648722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:43.680549   75402 cri.go:89] found id: ""
	I0816 18:17:43.680586   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.680597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:43.680604   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:43.680686   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:43.710473   75402 cri.go:89] found id: ""
	I0816 18:17:43.710497   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.710506   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:43.710514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:43.710576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:43.741415   75402 cri.go:89] found id: ""
	I0816 18:17:43.741442   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.741450   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:43.741456   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:43.741505   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:43.775018   75402 cri.go:89] found id: ""
	I0816 18:17:43.775051   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.775063   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:43.775074   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:43.775087   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:43.825596   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:43.825630   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:43.839133   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:43.839161   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:43.905645   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:43.905667   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:43.905679   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:43.988860   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:43.988901   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.525896   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:46.539147   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:46.539229   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:46.570703   75402 cri.go:89] found id: ""
	I0816 18:17:46.570726   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.570734   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:46.570740   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:46.570785   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:46.605909   75402 cri.go:89] found id: ""
	I0816 18:17:46.605939   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.605954   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:46.605961   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:46.606013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:46.638865   75402 cri.go:89] found id: ""
	I0816 18:17:46.638899   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.638911   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:46.638919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:46.638994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:46.671869   75402 cri.go:89] found id: ""
	I0816 18:17:46.671904   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.671917   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:46.671926   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:46.671988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:46.703423   75402 cri.go:89] found id: ""
	I0816 18:17:46.703464   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.703473   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:46.703479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:46.703545   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:46.735824   75402 cri.go:89] found id: ""
	I0816 18:17:46.735853   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.735864   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:46.735871   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:46.735926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:46.767122   75402 cri.go:89] found id: ""
	I0816 18:17:46.767146   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.767154   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:46.767160   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:46.767207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:46.798093   75402 cri.go:89] found id: ""
	I0816 18:17:46.798126   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.798140   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:46.798152   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:46.798167   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.832699   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:46.832725   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:46.884212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:46.884246   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:46.896896   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:46.896921   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:46.968805   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:46.968824   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:46.968838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:45.440474   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.940127   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.206534   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.206973   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.948252   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:48.448086   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.552581   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:49.565134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:49.565212   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:49.597012   75402 cri.go:89] found id: ""
	I0816 18:17:49.597042   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.597057   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:49.597067   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:49.597133   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:49.628902   75402 cri.go:89] found id: ""
	I0816 18:17:49.628935   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.628948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:49.628957   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:49.629025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:49.662668   75402 cri.go:89] found id: ""
	I0816 18:17:49.662698   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.662709   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:49.662715   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:49.662778   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:49.696354   75402 cri.go:89] found id: ""
	I0816 18:17:49.696381   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.696389   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:49.696395   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:49.696487   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:49.730801   75402 cri.go:89] found id: ""
	I0816 18:17:49.730838   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.730849   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:49.730856   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:49.730921   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:49.764474   75402 cri.go:89] found id: ""
	I0816 18:17:49.764503   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.764514   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:49.764522   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:49.764585   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:49.798577   75402 cri.go:89] found id: ""
	I0816 18:17:49.798616   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.798627   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:49.798634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:49.798703   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:49.830987   75402 cri.go:89] found id: ""
	I0816 18:17:49.831016   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.831024   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:49.831032   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:49.831043   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:49.883397   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:49.883433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:49.897208   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:49.897239   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:49.968363   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:49.968386   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:49.968398   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:50.056552   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:50.056583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:52.596191   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:52.609592   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:52.609668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:52.645775   75402 cri.go:89] found id: ""
	I0816 18:17:52.645807   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.645817   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:52.645823   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:52.645869   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:52.677817   75402 cri.go:89] found id: ""
	I0816 18:17:52.677852   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.677862   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:52.677870   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:52.677935   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:52.710618   75402 cri.go:89] found id: ""
	I0816 18:17:52.710648   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.710658   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:52.710664   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:52.710716   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:52.745830   75402 cri.go:89] found id: ""
	I0816 18:17:52.745858   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.745867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:52.745872   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:52.745929   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:52.778511   75402 cri.go:89] found id: ""
	I0816 18:17:52.778538   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.778548   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:52.778567   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:52.778632   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:52.810759   75402 cri.go:89] found id: ""
	I0816 18:17:52.810788   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.810800   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:52.810807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:52.810872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:52.843786   75402 cri.go:89] found id: ""
	I0816 18:17:52.843814   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.843824   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:52.843831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:52.843886   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:52.876886   75402 cri.go:89] found id: ""
	I0816 18:17:52.876914   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.876924   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:52.876934   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:52.876950   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:52.932519   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:52.932559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:52.946645   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:52.946671   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:53.018156   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:53.018177   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:53.018190   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:53.095562   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:53.095600   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:49.940263   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:51.940433   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.707635   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.206027   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:50.449204   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.949591   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
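
The pod_ready.go lines interleaved here come from three other profiles running in parallel (processes 74510, 74828 and 75006); each is polling its own metrics-server pod in kube-system and keeps seeing Ready=False. A sketch of an equivalent manual check, assuming the matching kube context is selected (the context names are not part of this excerpt; the pod name below is taken from the log):

    kubectl -n kube-system get pod metrics-server-6867b74b74-6hkzb \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" while the pod is unready, matching the pod_ready.go:103 entries above
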
	I0816 18:17:55.633820   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:55.646170   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:55.646238   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:55.678147   75402 cri.go:89] found id: ""
	I0816 18:17:55.678181   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.678194   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:55.678202   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:55.678264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:55.710910   75402 cri.go:89] found id: ""
	I0816 18:17:55.710938   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.710948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:55.710956   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:55.711012   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:55.744822   75402 cri.go:89] found id: ""
	I0816 18:17:55.744853   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.744863   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:55.744870   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:55.744931   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:55.791677   75402 cri.go:89] found id: ""
	I0816 18:17:55.791708   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.791719   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:55.791727   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:55.791788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:55.826448   75402 cri.go:89] found id: ""
	I0816 18:17:55.826481   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.826492   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:55.826500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:55.826564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:55.861178   75402 cri.go:89] found id: ""
	I0816 18:17:55.861210   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.861219   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:55.861225   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:55.861280   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:55.898073   75402 cri.go:89] found id: ""
	I0816 18:17:55.898099   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.898110   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:55.898117   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:55.898184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:55.931446   75402 cri.go:89] found id: ""
	I0816 18:17:55.931478   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.931487   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:55.931498   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:55.931514   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:55.999910   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:55.999931   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:55.999943   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:56.077240   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:56.077312   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:56.115479   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:56.115506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:56.166954   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:56.166989   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:54.440166   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.939865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:54.206368   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.206710   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.207053   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.448566   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:57.948891   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.680571   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:58.692824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:58.692890   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:58.729761   75402 cri.go:89] found id: ""
	I0816 18:17:58.729786   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.729794   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:58.729799   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:58.729857   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:58.764943   75402 cri.go:89] found id: ""
	I0816 18:17:58.765082   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.765113   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:58.765124   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:58.765179   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:58.801314   75402 cri.go:89] found id: ""
	I0816 18:17:58.801345   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.801357   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:58.801365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:58.801429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:58.833936   75402 cri.go:89] found id: ""
	I0816 18:17:58.833973   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.833982   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:58.833988   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:58.834046   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:58.870108   75402 cri.go:89] found id: ""
	I0816 18:17:58.870137   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.870148   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:58.870155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:58.870219   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:58.904157   75402 cri.go:89] found id: ""
	I0816 18:17:58.904184   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.904194   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:58.904201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:58.904264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:58.937862   75402 cri.go:89] found id: ""
	I0816 18:17:58.937891   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.937901   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:58.937909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:58.937972   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:58.972465   75402 cri.go:89] found id: ""
	I0816 18:17:58.972495   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.972506   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:58.972517   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:58.972532   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:59.047197   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:59.047223   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:59.047238   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:59.126634   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:59.126668   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:59.165528   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:59.165562   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:59.214294   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:59.214433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
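
With no control-plane containers present, the most informative of the gathered sources are the kubelet journal and dmesg, which is where the reason the static pods never start would normally show up. The two commands the log runs for these sources can be reused as-is on the node:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
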
	I0816 18:18:01.729662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:01.742582   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:01.742642   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:01.776148   75402 cri.go:89] found id: ""
	I0816 18:18:01.776180   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.776188   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:01.776197   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:01.776243   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:01.809186   75402 cri.go:89] found id: ""
	I0816 18:18:01.809218   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.809229   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:01.809237   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:01.809307   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:01.842379   75402 cri.go:89] found id: ""
	I0816 18:18:01.842406   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.842417   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:01.842425   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:01.842490   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:01.874706   75402 cri.go:89] found id: ""
	I0816 18:18:01.874739   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.874747   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:01.874753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:01.874813   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:01.915567   75402 cri.go:89] found id: ""
	I0816 18:18:01.915596   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.915607   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:01.915615   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:01.915675   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:01.951527   75402 cri.go:89] found id: ""
	I0816 18:18:01.951559   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.951569   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:01.951576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:01.951638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:01.983822   75402 cri.go:89] found id: ""
	I0816 18:18:01.983848   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.983856   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:01.983861   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:01.983909   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:02.018976   75402 cri.go:89] found id: ""
	I0816 18:18:02.019003   75402 logs.go:276] 0 containers: []
	W0816 18:18:02.019012   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:02.019019   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:02.019033   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:02.071096   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:02.071131   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:02.085163   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:02.085189   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:02.154771   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:02.154789   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:02.154800   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:02.242068   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:02.242105   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:58.941456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:01.440404   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.208085   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.705334   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.447843   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.448334   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.790311   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:04.803215   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:04.803298   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:04.835834   75402 cri.go:89] found id: ""
	I0816 18:18:04.835868   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.835879   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:04.835886   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:04.835951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:04.870000   75402 cri.go:89] found id: ""
	I0816 18:18:04.870032   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.870042   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:04.870049   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:04.870111   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:04.906624   75402 cri.go:89] found id: ""
	I0816 18:18:04.906653   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.906663   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:04.906670   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:04.906730   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:04.940115   75402 cri.go:89] found id: ""
	I0816 18:18:04.940139   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.940148   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:04.940155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:04.940213   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:04.974461   75402 cri.go:89] found id: ""
	I0816 18:18:04.974493   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.974503   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:04.974510   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:04.974571   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:05.006593   75402 cri.go:89] found id: ""
	I0816 18:18:05.006618   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.006628   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:05.006635   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:05.006691   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:05.040041   75402 cri.go:89] found id: ""
	I0816 18:18:05.040066   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.040082   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:05.040089   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:05.040144   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:05.072968   75402 cri.go:89] found id: ""
	I0816 18:18:05.072996   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.073005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:05.073014   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:05.073025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:05.124510   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:05.124543   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:05.145566   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:05.145592   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:05.221874   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:05.221898   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:05.221914   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:05.297283   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:05.297316   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
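
Each polling cycle in this log enumerates the same eight component names (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) with one crictl query per name. A compact equivalent of that loop, runnable on the node and built only from the commands shown above:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"   # empty output corresponds to "0 containers" in the log
    done
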
	I0816 18:18:07.837564   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:07.850372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:07.850441   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:07.882879   75402 cri.go:89] found id: ""
	I0816 18:18:07.882906   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.882915   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:07.882920   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:07.882978   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:07.916983   75402 cri.go:89] found id: ""
	I0816 18:18:07.917011   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.917019   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:07.917024   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:07.917075   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:07.953864   75402 cri.go:89] found id: ""
	I0816 18:18:07.953886   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.953896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:07.953903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:07.953951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:07.994375   75402 cri.go:89] found id: ""
	I0816 18:18:07.994399   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.994408   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:07.994414   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:07.994472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:08.029137   75402 cri.go:89] found id: ""
	I0816 18:18:08.029170   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.029182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:08.029189   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:08.029253   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:08.062331   75402 cri.go:89] found id: ""
	I0816 18:18:08.062358   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.062367   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:08.062373   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:08.062430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:08.097021   75402 cri.go:89] found id: ""
	I0816 18:18:08.097044   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.097051   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:08.097056   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:08.097112   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:03.940724   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.441847   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.706298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.707011   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.948066   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.948125   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.948992   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.131147   75402 cri.go:89] found id: ""
	I0816 18:18:08.131174   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.131184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:08.131192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:08.131203   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.182334   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:08.182373   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:08.195459   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:08.195485   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:08.260333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:08.260351   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:08.260363   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:08.344466   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:08.344506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:10.881640   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:10.896400   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:10.896482   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:10.934034   75402 cri.go:89] found id: ""
	I0816 18:18:10.934068   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.934076   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:10.934081   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:10.934130   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:10.966697   75402 cri.go:89] found id: ""
	I0816 18:18:10.966724   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.966733   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:10.966741   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:10.966807   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:11.000540   75402 cri.go:89] found id: ""
	I0816 18:18:11.000568   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.000579   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:11.000587   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:11.000665   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:11.034322   75402 cri.go:89] found id: ""
	I0816 18:18:11.034346   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.034354   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:11.034360   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:11.034407   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:11.067081   75402 cri.go:89] found id: ""
	I0816 18:18:11.067108   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.067116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:11.067122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:11.067170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:11.099726   75402 cri.go:89] found id: ""
	I0816 18:18:11.099753   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.099763   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:11.099770   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:11.099834   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:11.133187   75402 cri.go:89] found id: ""
	I0816 18:18:11.133216   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.133226   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:11.133235   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:11.133315   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:11.167121   75402 cri.go:89] found id: ""
	I0816 18:18:11.167157   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.167166   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:11.167177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:11.167194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:11.181396   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:11.181424   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:11.248286   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:11.248313   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:11.248325   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:11.328546   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:11.328583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:11.365534   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:11.365576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.939686   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.941097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.440001   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:09.207018   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:11.207677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.706818   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.949461   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.448057   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.919889   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:13.935097   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:13.935178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:13.973196   75402 cri.go:89] found id: ""
	I0816 18:18:13.973225   75402 logs.go:276] 0 containers: []
	W0816 18:18:13.973236   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:13.973244   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:13.973328   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:14.011913   75402 cri.go:89] found id: ""
	I0816 18:18:14.011936   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.011944   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:14.011950   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:14.012013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:14.048418   75402 cri.go:89] found id: ""
	I0816 18:18:14.048447   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.048459   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:14.048466   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:14.048515   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:14.082462   75402 cri.go:89] found id: ""
	I0816 18:18:14.082496   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.082506   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:14.082514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:14.082576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:14.114958   75402 cri.go:89] found id: ""
	I0816 18:18:14.114986   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.114996   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:14.115005   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:14.115067   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:14.154829   75402 cri.go:89] found id: ""
	I0816 18:18:14.154865   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.154878   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:14.154888   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:14.154957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:14.190012   75402 cri.go:89] found id: ""
	I0816 18:18:14.190045   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.190053   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:14.190058   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:14.190108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:14.223314   75402 cri.go:89] found id: ""
	I0816 18:18:14.223341   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.223350   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:14.223360   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:14.223381   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:14.274995   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:14.275035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:14.288518   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:14.288564   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:14.365668   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:14.365691   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:14.365705   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:14.445828   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:14.445866   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:16.981802   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:16.994729   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:16.994794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:17.029790   75402 cri.go:89] found id: ""
	I0816 18:18:17.029821   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.029839   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:17.029848   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:17.029912   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:17.063194   75402 cri.go:89] found id: ""
	I0816 18:18:17.063223   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.063233   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:17.063240   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:17.063293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:17.097808   75402 cri.go:89] found id: ""
	I0816 18:18:17.097831   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.097839   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:17.097844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:17.097900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:17.132646   75402 cri.go:89] found id: ""
	I0816 18:18:17.132682   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.132691   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:17.132697   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:17.132751   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:17.164285   75402 cri.go:89] found id: ""
	I0816 18:18:17.164316   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.164328   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:17.164335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:17.164391   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:17.195642   75402 cri.go:89] found id: ""
	I0816 18:18:17.195672   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.195683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:17.195691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:17.195754   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:17.228005   75402 cri.go:89] found id: ""
	I0816 18:18:17.228033   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.228041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:17.228047   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:17.228107   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:17.279195   75402 cri.go:89] found id: ""
	I0816 18:18:17.279229   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.279241   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:17.279253   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:17.279270   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:17.360084   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:17.360125   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:17.405184   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:17.405210   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:17.457453   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:17.457483   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:17.471472   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:17.471502   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:17.536478   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:15.939660   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.940456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:16.207019   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:18.706191   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:15.450419   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.948912   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.036644   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:20.050169   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:20.050244   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.087943   75402 cri.go:89] found id: ""
	I0816 18:18:20.087971   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.087981   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:20.087988   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:20.088051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:20.119908   75402 cri.go:89] found id: ""
	I0816 18:18:20.119931   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.119940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:20.119945   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:20.120013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:20.152115   75402 cri.go:89] found id: ""
	I0816 18:18:20.152146   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.152156   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:20.152162   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:20.152209   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:20.189464   75402 cri.go:89] found id: ""
	I0816 18:18:20.189488   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.189495   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:20.189500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:20.189550   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:20.224779   75402 cri.go:89] found id: ""
	I0816 18:18:20.224807   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.224817   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:20.224824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:20.224888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:20.257021   75402 cri.go:89] found id: ""
	I0816 18:18:20.257048   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.257059   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:20.257067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:20.257121   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:20.290991   75402 cri.go:89] found id: ""
	I0816 18:18:20.291023   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.291032   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:20.291039   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:20.291099   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:20.323674   75402 cri.go:89] found id: ""
	I0816 18:18:20.323704   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.323715   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:20.323726   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:20.323742   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:20.373411   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:20.373447   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:20.386954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:20.386981   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:20.464366   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.464384   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:20.464403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:20.541836   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:20.541881   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:23.085071   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:23.100460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:23.100524   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.440656   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.942713   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.706771   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.207824   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.448676   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.141239   75402 cri.go:89] found id: ""
	I0816 18:18:23.141269   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.141280   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:23.141287   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:23.141354   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:23.172914   75402 cri.go:89] found id: ""
	I0816 18:18:23.172941   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.172950   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:23.172958   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:23.173015   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:23.205593   75402 cri.go:89] found id: ""
	I0816 18:18:23.205621   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.205632   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:23.205640   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:23.205706   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:23.239358   75402 cri.go:89] found id: ""
	I0816 18:18:23.239383   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.239392   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:23.239401   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:23.239463   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:23.271798   75402 cri.go:89] found id: ""
	I0816 18:18:23.271828   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.271838   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:23.271844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:23.271911   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:23.305287   75402 cri.go:89] found id: ""
	I0816 18:18:23.305316   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.305327   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:23.305335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:23.305397   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:23.344041   75402 cri.go:89] found id: ""
	I0816 18:18:23.344067   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.344075   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:23.344080   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:23.344134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:23.376540   75402 cri.go:89] found id: ""
	I0816 18:18:23.376571   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.376583   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
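The `crictl ps -a --quiet --name=<component>` calls above return no IDs for any control-plane component, which is what drives the repeated "No container was found matching ..." warnings. A sketch of that per-component check, with the command and component list taken from the log and the surrounding helper assumed:

    package main

    // For each expected component, ask crictl for matching container IDs in
    // quiet mode; an empty result is reported as "no container found".
    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids := listContainers(c)
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }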
	I0816 18:18:23.376601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:23.376616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:23.428265   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:23.428301   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:23.441377   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:23.441404   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:23.509219   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:23.509243   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:23.509259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:23.589151   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:23.589186   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.126176   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:26.140228   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:26.140292   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:26.176768   75402 cri.go:89] found id: ""
	I0816 18:18:26.176807   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.176820   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:26.176829   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:26.176887   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:26.212357   75402 cri.go:89] found id: ""
	I0816 18:18:26.212383   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.212390   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:26.212396   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:26.212457   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:26.245256   75402 cri.go:89] found id: ""
	I0816 18:18:26.245290   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.245302   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:26.245309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:26.245370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:26.277525   75402 cri.go:89] found id: ""
	I0816 18:18:26.277561   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.277569   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:26.277575   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:26.277627   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:26.310928   75402 cri.go:89] found id: ""
	I0816 18:18:26.310956   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.310967   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:26.310976   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:26.311052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:26.344595   75402 cri.go:89] found id: ""
	I0816 18:18:26.344647   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.344661   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:26.344669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:26.344741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:26.377776   75402 cri.go:89] found id: ""
	I0816 18:18:26.377805   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.377814   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:26.377820   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:26.377872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:26.411139   75402 cri.go:89] found id: ""
	I0816 18:18:26.411167   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.411179   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:26.411190   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:26.411204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:26.493802   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:26.493838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.529542   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:26.529576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:26.583544   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:26.583588   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:26.596429   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:26.596459   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:26.667858   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
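Every `describe nodes` attempt fails the same way because nothing is listening on the node's apiserver port while no kube-apiserver container is running. A small sketch (address and timeout assumed) that probes the port before attempting kubectl:

    package main

    // Distinguish "apiserver down" from other kubectl failures by dialing the
    // port kubectl would talk to.
    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable on 127.0.0.1:8443; skip kubectl describe:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open; kubectl describe nodes has a chance to succeed")
    }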
	I0816 18:18:25.441062   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.940609   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.706109   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:28.206196   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.448352   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.947950   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:29.168766   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:29.182032   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:29.182103   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:29.220213   75402 cri.go:89] found id: ""
	I0816 18:18:29.220239   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.220247   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:29.220253   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:29.220300   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:29.257820   75402 cri.go:89] found id: ""
	I0816 18:18:29.257850   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.257861   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:29.257867   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:29.257933   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:29.290450   75402 cri.go:89] found id: ""
	I0816 18:18:29.290473   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.290480   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:29.290485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:29.290546   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:29.328032   75402 cri.go:89] found id: ""
	I0816 18:18:29.328061   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.328070   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:29.328076   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:29.328135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:29.362104   75402 cri.go:89] found id: ""
	I0816 18:18:29.362132   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.362141   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:29.362149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:29.362218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:29.395258   75402 cri.go:89] found id: ""
	I0816 18:18:29.395290   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.395301   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:29.395309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:29.395375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:29.426617   75402 cri.go:89] found id: ""
	I0816 18:18:29.426646   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.426656   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:29.426663   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:29.426725   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:29.462861   75402 cri.go:89] found id: ""
	I0816 18:18:29.462890   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.462901   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:29.462912   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:29.462928   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:29.514882   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:29.514915   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:29.528101   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:29.528128   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:29.598983   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.599005   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:29.599020   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:29.684955   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:29.684991   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.230155   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:32.244158   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:32.244226   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:32.281993   75402 cri.go:89] found id: ""
	I0816 18:18:32.282020   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.282031   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:32.282037   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:32.282100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:32.316870   75402 cri.go:89] found id: ""
	I0816 18:18:32.316896   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.316906   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:32.316914   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:32.316976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:32.352597   75402 cri.go:89] found id: ""
	I0816 18:18:32.352637   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.352649   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:32.352656   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:32.352722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:32.387520   75402 cri.go:89] found id: ""
	I0816 18:18:32.387564   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.387576   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:32.387584   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:32.387638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:32.421499   75402 cri.go:89] found id: ""
	I0816 18:18:32.421526   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.421537   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:32.421544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:32.421603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:32.460048   75402 cri.go:89] found id: ""
	I0816 18:18:32.460075   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.460086   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:32.460093   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:32.460151   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:32.498148   75402 cri.go:89] found id: ""
	I0816 18:18:32.498176   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.498184   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:32.498190   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:32.498248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:32.530683   75402 cri.go:89] found id: ""
	I0816 18:18:32.530717   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.530730   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:32.530741   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:32.530762   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:32.614776   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:32.614820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.655628   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:32.655667   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:32.722763   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:32.722807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:32.739817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:32.739847   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:32.819297   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:30.440684   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.441210   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.206433   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.707436   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.448781   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.457660   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:35.320173   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:35.332427   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:35.332503   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:35.366316   75402 cri.go:89] found id: ""
	I0816 18:18:35.366346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.366357   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:35.366365   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:35.366433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:35.399308   75402 cri.go:89] found id: ""
	I0816 18:18:35.399346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.399357   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:35.399367   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:35.399434   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:35.434926   75402 cri.go:89] found id: ""
	I0816 18:18:35.434958   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.434971   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:35.434980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:35.435042   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:35.473222   75402 cri.go:89] found id: ""
	I0816 18:18:35.473247   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.473258   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:35.473266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:35.473343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:35.505484   75402 cri.go:89] found id: ""
	I0816 18:18:35.505521   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.505533   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:35.505540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:35.505608   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:35.540532   75402 cri.go:89] found id: ""
	I0816 18:18:35.540573   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.540584   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:35.540590   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:35.540663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:35.574205   75402 cri.go:89] found id: ""
	I0816 18:18:35.574235   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.574245   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:35.574252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:35.574343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:35.614707   75402 cri.go:89] found id: ""
	I0816 18:18:35.614732   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.614739   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:35.614747   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:35.614759   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:35.690830   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:35.690861   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:35.726601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:35.726627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:35.774706   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:35.774736   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:35.787557   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:35.787616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:35.857474   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:34.940337   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.440507   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:34.701151   74828 pod_ready.go:82] duration metric: took 4m0.000965442s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:34.701178   74828 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:34.701196   74828 pod_ready.go:39] duration metric: took 4m13.502588966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:34.701228   74828 kubeadm.go:597] duration metric: took 4m21.306103533s to restartPrimaryControlPlane
	W0816 18:18:34.701293   74828 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:34.701330   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
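At this point the 4m0s wait for the metrics-server pod's Ready condition has expired, so the restart path gives up and falls back to `kubeadm reset`. For reference, a generic client-go sketch of such a readiness poll; this is not minikube's own pod_ready.go, and the kubeconfig path, pod name, and deadline mirror the log but are assumptions here:

    package main

    // Poll one pod and inspect its PodReady condition until a deadline.
    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-rxtwg", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready") // the state logged above
    }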
	I0816 18:18:34.948583   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.447544   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:39.448942   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:38.358057   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:38.371128   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:38.371189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:38.404812   75402 cri.go:89] found id: ""
	I0816 18:18:38.404844   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.404855   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:38.404864   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:38.404926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:38.437922   75402 cri.go:89] found id: ""
	I0816 18:18:38.437950   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.437960   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:38.437967   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:38.438023   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:38.471474   75402 cri.go:89] found id: ""
	I0816 18:18:38.471509   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.471519   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:38.471525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:38.471582   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:38.510132   75402 cri.go:89] found id: ""
	I0816 18:18:38.510158   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.510168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:38.510184   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:38.510246   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:38.542212   75402 cri.go:89] found id: ""
	I0816 18:18:38.542251   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.542262   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:38.542269   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:38.542341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:38.579037   75402 cri.go:89] found id: ""
	I0816 18:18:38.579068   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.579076   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:38.579082   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:38.579129   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:38.619219   75402 cri.go:89] found id: ""
	I0816 18:18:38.619252   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.619263   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:38.619272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:38.619335   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:38.655124   75402 cri.go:89] found id: ""
	I0816 18:18:38.655149   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.655169   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:38.655180   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:38.655194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:38.737857   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:38.737894   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:38.779777   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:38.779806   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:38.831556   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:38.831590   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:38.844496   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:38.844523   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:38.914543   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.415612   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:41.428187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:41.428251   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:41.462932   75402 cri.go:89] found id: ""
	I0816 18:18:41.462964   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.462975   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:41.462983   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:41.463043   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:41.497712   75402 cri.go:89] found id: ""
	I0816 18:18:41.497739   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.497748   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:41.497754   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:41.497804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:41.528430   75402 cri.go:89] found id: ""
	I0816 18:18:41.528455   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.528463   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:41.528468   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:41.528527   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:41.560048   75402 cri.go:89] found id: ""
	I0816 18:18:41.560071   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.560081   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:41.560088   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:41.560142   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:41.592536   75402 cri.go:89] found id: ""
	I0816 18:18:41.592566   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.592577   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:41.592585   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:41.592663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:41.626850   75402 cri.go:89] found id: ""
	I0816 18:18:41.626884   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.626894   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:41.626902   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:41.626965   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:41.660452   75402 cri.go:89] found id: ""
	I0816 18:18:41.660478   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.660486   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:41.660491   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:41.660542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:41.695990   75402 cri.go:89] found id: ""
	I0816 18:18:41.696012   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.696020   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:41.696028   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:41.696039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:41.733107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:41.733134   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:41.782812   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:41.782843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:41.795954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:41.795984   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:41.867473   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.867526   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:41.867545   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:39.442037   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.940088   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.948682   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:43.942215   75006 pod_ready.go:82] duration metric: took 4m0.000164284s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:43.942239   75006 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:43.942255   75006 pod_ready.go:39] duration metric: took 4m12.163955241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:43.942279   75006 kubeadm.go:597] duration metric: took 4m21.898271101s to restartPrimaryControlPlane
	W0816 18:18:43.942326   75006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:43.942352   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:44.450340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:44.463299   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:44.463361   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:44.495068   75402 cri.go:89] found id: ""
	I0816 18:18:44.495098   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.495108   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:44.495116   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:44.495221   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:44.529615   75402 cri.go:89] found id: ""
	I0816 18:18:44.529638   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.529646   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:44.529651   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:44.529701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:44.565275   75402 cri.go:89] found id: ""
	I0816 18:18:44.565298   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.565306   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:44.565321   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:44.565384   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:44.598554   75402 cri.go:89] found id: ""
	I0816 18:18:44.598590   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.598601   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:44.598609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:44.598673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:44.631389   75402 cri.go:89] found id: ""
	I0816 18:18:44.631422   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.631436   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:44.631446   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:44.631519   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:44.663986   75402 cri.go:89] found id: ""
	I0816 18:18:44.664013   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.664023   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:44.664031   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:44.664095   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:44.700238   75402 cri.go:89] found id: ""
	I0816 18:18:44.700263   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.700272   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:44.700277   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:44.700330   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:44.732737   75402 cri.go:89] found id: ""
	I0816 18:18:44.732766   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.732779   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:44.732790   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:44.732807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.806427   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:44.806462   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:44.842965   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:44.842994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:44.895745   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:44.895781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:44.909850   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:44.909885   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:44.979315   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:47.479563   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:47.491876   75402 kubeadm.go:597] duration metric: took 4m4.431091965s to restartPrimaryControlPlane
	W0816 18:18:47.491939   75402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:47.491962   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:43.941047   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:46.440592   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:48.441208   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:51.168302   75402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676317513s)
	I0816 18:18:51.168387   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:18:51.182492   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:18:51.192403   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:18:51.202058   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:18:51.202075   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:18:51.202115   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:18:51.210661   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:18:51.210721   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:18:51.219979   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:18:51.228422   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:18:51.228488   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:18:51.237159   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.245555   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:18:51.245622   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.253986   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:18:51.261885   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:18:51.261927   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
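Before re-running `kubeadm init`, each kubeconfig under /etc/kubernetes is grep-checked for the expected control-plane endpoint and removed when the check fails (here the files are simply gone after `kubeadm reset`, so every grep exits with status 2). A minimal sketch of that cleanup, with paths and endpoint from the log and the helper structure assumed:

    package main

    // Remove any kubeconfig that does not reference the expected control-plane
    // endpoint, so kubeadm init regenerates it instead of reusing a stale file.
    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
                // Endpoint missing, or the file does not exist at all.
                fmt.Printf("%q not found in %s, removing\n", endpoint, conf)
                exec.Command("sudo", "rm", "-f", conf).Run()
            }
        }
    }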
	I0816 18:18:51.270479   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:18:51.335784   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:18:51.335883   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:18:51.482910   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:18:51.483069   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:18:51.483228   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:18:51.652730   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:18:51.655077   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:18:51.655185   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:18:51.655304   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:18:51.655425   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:18:51.655521   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:18:51.657408   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:18:51.657485   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:18:51.657561   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:18:51.657645   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:18:51.657748   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:18:51.657854   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:18:51.657911   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:18:51.657984   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:18:51.720786   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:18:51.991165   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:18:52.140983   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:18:52.453361   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:18:52.467210   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:18:52.469222   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:18:52.469338   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:18:52.590938   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:18:52.592875   75402 out.go:235]   - Booting up control plane ...
	I0816 18:18:52.592987   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:18:52.602597   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:18:52.603616   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:18:52.604417   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:18:52.606669   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
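The re-bootstrap then runs `kubeadm init` against the generated /var/tmp/minikube/kubeadm.yaml with a list of preflight errors to ignore, and waits for the kubelet to bring up the static control-plane pods (up to 4m0s, per the line above). A hedged stand-in that wraps the same command, with the ignore-preflight-errors list abbreviated and the overall timeout an assumption:

    package main

    // Run the init command from the log under a bounding context so a hung
    // bootstrap cannot block forever.
    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init ` +
            `--config /var/tmp/minikube/kubeadm.yaml ` +
            `--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem`
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
        defer cancel()
        out, err := exec.CommandContext(ctx, "/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("%s\n", out)
        if err != nil {
            fmt.Println("kubeadm init failed:", err)
        }
    }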
	I0816 18:18:50.939639   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:52.940202   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:54.940917   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:57.439382   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:59.443139   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:01.940496   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:00.803654   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.102297191s)
	I0816 18:19:00.803740   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:00.818126   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:00.827602   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:00.836389   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:00.836410   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:00.836455   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:19:00.844830   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:00.844880   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:00.853736   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:19:00.862795   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:00.862855   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:00.872056   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.880410   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:00.880461   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.889000   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:19:00.897508   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:00.897568   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:00.906256   74828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:00.953336   74828 kubeadm.go:310] W0816 18:19:00.929461    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:00.955337   74828 kubeadm.go:310] W0816 18:19:00.931382    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:01.068247   74828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
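Both warnings above are advisory, and the init still completes further down in this log: the generated kubeadm.yaml uses the older v1beta3 kubeadm API, and the kubelet unit is started by minikube rather than enabled in systemd. A hedged sketch of the remediation the warnings themselves suggest (the new-config path is hypothetical):

# rewrite the config against the current kubeadm API version
sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm.new.yaml
# silence the Service-Kubelet preflight warning
sudo systemctl enable kubelet.service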
	I0816 18:19:03.940545   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:06.439727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:08.440027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:09.225829   74828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:09.225908   74828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:09.226014   74828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:09.226126   74828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:09.226242   74828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:09.226329   74828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:09.228065   74828 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:09.228133   74828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:09.228183   74828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:09.228252   74828 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:09.228315   74828 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:09.228403   74828 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:09.228489   74828 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:09.228584   74828 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:09.228686   74828 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:09.228787   74828 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:09.228864   74828 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:09.228903   74828 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:09.228983   74828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:09.229052   74828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:09.229147   74828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:09.229234   74828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:09.229332   74828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:09.229410   74828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:09.229532   74828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:09.229607   74828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.230874   74828 out.go:235]   - Booting up control plane ...
	I0816 18:19:09.230948   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:09.231032   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:09.231090   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:09.231202   74828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:09.231321   74828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:09.231381   74828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:09.231572   74828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:09.231662   74828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:09.231711   74828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.32263ms
	I0816 18:19:09.231774   74828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:09.231824   74828 kubeadm.go:310] [api-check] The API server is healthy after 5.002367118s
	I0816 18:19:09.231923   74828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:09.232091   74828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:09.232166   74828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:09.232419   74828 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-864476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:09.232497   74828 kubeadm.go:310] [bootstrap-token] Using token: 6m1jus.xr9uhx26t28q092p
	I0816 18:19:09.233962   74828 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:09.234068   74828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:09.234164   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:09.234315   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:09.234425   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:09.234522   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:09.234615   74828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:09.234775   74828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:09.234830   74828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:09.234892   74828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:09.234901   74828 kubeadm.go:310] 
	I0816 18:19:09.234971   74828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:09.234980   74828 kubeadm.go:310] 
	I0816 18:19:09.235067   74828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:09.235076   74828 kubeadm.go:310] 
	I0816 18:19:09.235115   74828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:09.235194   74828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:09.235271   74828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:09.235280   74828 kubeadm.go:310] 
	I0816 18:19:09.235367   74828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:09.235376   74828 kubeadm.go:310] 
	I0816 18:19:09.235448   74828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:09.235459   74828 kubeadm.go:310] 
	I0816 18:19:09.235533   74828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:09.235607   74828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:09.235677   74828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:09.235683   74828 kubeadm.go:310] 
	I0816 18:19:09.235795   74828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:09.235907   74828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:09.235916   74828 kubeadm.go:310] 
	I0816 18:19:09.235986   74828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236080   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:09.236099   74828 kubeadm.go:310] 	--control-plane 
	I0816 18:19:09.236105   74828 kubeadm.go:310] 
	I0816 18:19:09.236177   74828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:09.236185   74828 kubeadm.go:310] 
	I0816 18:19:09.236268   74828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236403   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:09.236416   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:19:09.236422   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:09.237971   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
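For the bridge CNI step announced above, minikube copies a small conflist to /etc/cni/net.d/1-k8s.conflist (the 496-byte scp shows up a few lines below). As a rough sketch of the general shape such a bridge-plugin conflist takes, written as a shell heredoc; the field values, in particular the pod subnet, are illustrative assumptions and not the exact file minikube ships:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF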
	I0816 18:19:10.069497   75006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.127122656s)
	I0816 18:19:10.069585   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:10.085322   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:10.098736   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:10.108163   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:10.108183   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:10.108224   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:19:10.117330   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:10.117382   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:10.127090   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:19:10.135574   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:10.135648   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:10.146127   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.154474   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:10.154533   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.163245   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:19:10.171315   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:10.171375   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:10.181088   75006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:10.225495   75006 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:10.225571   75006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:10.327332   75006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:10.327442   75006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:10.327586   75006 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:10.335739   75006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:10.337610   75006 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:10.337730   75006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:10.337818   75006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:10.337935   75006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:10.338054   75006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:10.338174   75006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:10.338254   75006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:10.338359   75006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:10.338452   75006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:10.338562   75006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:10.338668   75006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:10.338718   75006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:10.338796   75006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:10.437447   75006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:10.868191   75006 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:10.961497   75006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:11.363158   75006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:11.963929   75006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:11.964410   75006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:11.967675   75006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.239250   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:09.250270   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:09.267205   74828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:09.267346   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.267366   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-864476 minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=no-preload-864476 minikube.k8s.io/primary=true
	I0816 18:19:09.282111   74828 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:09.471160   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.971453   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.471576   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.971748   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.471954   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.971371   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.471626   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.972021   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.472254   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.588350   74828 kubeadm.go:1113] duration metric: took 4.321062687s to wait for elevateKubeSystemPrivileges
	I0816 18:19:13.588392   74828 kubeadm.go:394] duration metric: took 5m0.245036951s to StartCluster
	I0816 18:19:13.588413   74828 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.588500   74828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:13.591118   74828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.591418   74828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:13.591683   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:13.591744   74828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:13.591809   74828 addons.go:69] Setting storage-provisioner=true in profile "no-preload-864476"
	I0816 18:19:13.591839   74828 addons.go:234] Setting addon storage-provisioner=true in "no-preload-864476"
	W0816 18:19:13.591851   74828 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:19:13.591882   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592025   74828 addons.go:69] Setting default-storageclass=true in profile "no-preload-864476"
	I0816 18:19:13.592070   74828 addons.go:69] Setting metrics-server=true in profile "no-preload-864476"
	I0816 18:19:13.592135   74828 addons.go:234] Setting addon metrics-server=true in "no-preload-864476"
	W0816 18:19:13.592150   74828 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:13.592073   74828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864476"
	I0816 18:19:13.592272   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592206   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592326   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592654   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592677   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592731   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592753   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592790   74828 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:13.594236   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:13.613019   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0816 18:19:13.613061   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0816 18:19:13.613087   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0816 18:19:13.613498   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613552   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613708   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.614094   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614113   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614198   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614222   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614403   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614420   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614478   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614675   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614728   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614856   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.615039   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615068   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.615401   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615442   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.619787   74828 addons.go:234] Setting addon default-storageclass=true in "no-preload-864476"
	W0816 18:19:13.619815   74828 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:13.619848   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.620274   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.620438   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.642013   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0816 18:19:13.642196   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0816 18:19:13.642654   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643201   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.643227   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.643304   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643888   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644065   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.644086   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.644537   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644548   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.644591   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.645002   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.646881   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0816 18:19:13.647127   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.647406   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.648126   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.648156   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.648725   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.648935   74828 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:13.649121   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.649823   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:13.649840   74828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:13.649861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.651524   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.652917   74828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:10.441027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:12.939870   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:13.653916   74828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.653933   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:13.653952   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.654035   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654463   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.654482   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654665   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.654883   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.655044   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.655247   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.657315   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657699   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.657783   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657974   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.658125   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.658247   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.658362   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.670111   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0816 18:19:13.670711   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.671220   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.671239   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.671585   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.671778   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.673274   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.673480   74828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:13.673493   74828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:13.673511   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.677160   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677542   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.677564   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677854   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.678049   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.678170   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.678263   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:11.970291   75006 out.go:235]   - Booting up control plane ...
	I0816 18:19:11.970385   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:11.970516   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:11.970617   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:11.988374   75006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:11.997980   75006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:11.998045   75006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:12.132297   75006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:12.132447   75006 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:13.135489   75006 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003222114s
	I0816 18:19:13.135584   75006 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
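The api-check phase above polls the API server's health endpoints until they report healthy. Once admin.conf exists, the same probe can be run by hand; a small illustrative sketch (kubectl here stands for whichever kubectl binary is on the node, the test uses /var/lib/minikube/binaries/v1.31.0/kubectl, and the ?verbose flag just itemizes the individual checks):

sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get --raw '/readyz?verbose'   # lists each readiness check and its status
sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get --raw '/livez'            # terse liveness answer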
	I0816 18:19:13.840111   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:13.903130   74828 node_ready.go:35] waiting up to 6m0s for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915130   74828 node_ready.go:49] node "no-preload-864476" has status "Ready":"True"
	I0816 18:19:13.915163   74828 node_ready.go:38] duration metric: took 12.001127ms for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915174   74828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:13.926756   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:13.944598   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.971002   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:13.971036   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:13.998897   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:14.015731   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:14.015754   74828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:14.080186   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:14.080212   74828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:14.187279   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:15.075984   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077053329s)
	I0816 18:19:15.076058   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076071   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076364   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131733705s)
	I0816 18:19:15.076478   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076495   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076405   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076567   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076591   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076600   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076436   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.076786   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076838   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076859   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076879   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076969   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076987   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.077443   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.077517   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.077535   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.164872   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.164903   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.165218   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.165238   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373294   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1859614s)
	I0816 18:19:15.373399   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373417   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.373716   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.373769   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.373804   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373825   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373837   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.374124   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.374130   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.374181   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.374192   74828 addons.go:475] Verifying addon metrics-server=true in "no-preload-864476"
	I0816 18:19:15.375801   74828 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
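With the addons reported as enabled, a quick way to see whether metrics-server has actually come up (which is what the repeated pod_ready checks in this log are waiting on) is sketched below; the k8s-app=metrics-server label is the upstream default and is assumed here:

kubectl -n kube-system get pods -l k8s-app=metrics-server   # should eventually show 1/1 Ready
kubectl top nodes                                           # only works once the metrics.k8s.io APIService is serving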
	I0816 18:19:17.638005   75006 kubeadm.go:310] [api-check] The API server is healthy after 4.502130995s
	I0816 18:19:17.658334   75006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:17.678882   75006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:17.709612   75006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:17.709881   75006 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-256678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:17.724755   75006 kubeadm.go:310] [bootstrap-token] Using token: cdypho.k0vxtmnp4c93945s
	I0816 18:19:14.941895   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.440923   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:15.377611   74828 addons.go:510] duration metric: took 1.785861834s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:15.934515   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:18.435321   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.726222   75006 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:17.726361   75006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:17.733325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:17.740707   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:17.747325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:17.751554   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:17.761084   75006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:18.044607   75006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:18.485134   75006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:19.044481   75006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:19.045968   75006 kubeadm.go:310] 
	I0816 18:19:19.046038   75006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:19.046069   75006 kubeadm.go:310] 
	I0816 18:19:19.046185   75006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:19.046198   75006 kubeadm.go:310] 
	I0816 18:19:19.046229   75006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:19.046298   75006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:19.046343   75006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:19.046349   75006 kubeadm.go:310] 
	I0816 18:19:19.046396   75006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:19.046413   75006 kubeadm.go:310] 
	I0816 18:19:19.046504   75006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:19.046529   75006 kubeadm.go:310] 
	I0816 18:19:19.046614   75006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:19.046718   75006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:19.046813   75006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:19.046828   75006 kubeadm.go:310] 
	I0816 18:19:19.046941   75006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:19.047047   75006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:19.047056   75006 kubeadm.go:310] 
	I0816 18:19:19.047153   75006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047304   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:19.047346   75006 kubeadm.go:310] 	--control-plane 
	I0816 18:19:19.047358   75006 kubeadm.go:310] 
	I0816 18:19:19.047470   75006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:19.047480   75006 kubeadm.go:310] 
	I0816 18:19:19.047596   75006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047740   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:19.048871   75006 kubeadm.go:310] W0816 18:19:10.202021    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049167   75006 kubeadm.go:310] W0816 18:19:10.202700    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049279   75006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:19.049304   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:19:19.049318   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:19.051543   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:19.052677   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:19.063536   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:19.084460   75006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:19.084540   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.084608   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-256678 minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=default-k8s-diff-port-256678 minikube.k8s.io/primary=true
	I0816 18:19:19.257760   75006 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:19.258124   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.759000   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
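The join commands printed above carry a bootstrap token that, by default, expires after 24 hours. If a node needs to join later, a fresh command can be minted on the control plane; illustrative only, the test itself never does this:

sudo kubeadm token list                           # show existing bootstrap tokens and their TTLs
sudo kubeadm token create --print-join-command    # mint a new token and print a ready-to-run kubeadm join line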
	I0816 18:19:19.940737   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:22.440273   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.934243   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:23.433046   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.258798   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:20.759112   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.258598   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.758433   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.258181   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.758276   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.258184   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.758168   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.846653   75006 kubeadm.go:1113] duration metric: took 4.762173901s to wait for elevateKubeSystemPrivileges
	I0816 18:19:23.846688   75006 kubeadm.go:394] duration metric: took 5m1.846731834s to StartCluster
	I0816 18:19:23.846708   75006 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.846784   75006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:23.848375   75006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.848662   75006 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:23.848750   75006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:23.848814   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:23.848840   75006 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848858   75006 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848866   75006 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848878   75006 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-256678"
	I0816 18:19:23.848882   75006 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.848887   75006 addons.go:243] addon storage-provisioner should already be in state true
	W0816 18:19:23.848890   75006 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:23.848915   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848918   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848914   75006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-256678"
	I0816 18:19:23.849232   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849259   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849271   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849293   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849362   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849404   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.850478   75006 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:23.852034   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:23.865786   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0816 18:19:23.865939   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0816 18:19:23.866248   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866304   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866398   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0816 18:19:23.866816   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866845   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866860   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.866863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866935   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867328   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867333   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867430   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.867447   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.867742   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867871   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.867897   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.868227   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.868247   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.870993   75006 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.871020   75006 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:23.871051   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.871403   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.871433   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.885139   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0816 18:19:23.885814   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.886386   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.886403   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.886814   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.886856   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0816 18:19:23.887024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.887202   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.887542   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0816 18:19:23.887784   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.887797   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.887863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.888165   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.888372   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.888389   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.889026   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.889254   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.889268   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.889518   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.889758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.890483   75006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:23.891262   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.891838   75006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:23.891859   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:23.891877   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.892581   75006 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:23.893621   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:23.893684   75006 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:23.893882   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.894413   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.894973   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.894994   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.895161   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.895322   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.895578   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.895757   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.897167   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897666   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.897685   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897802   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.897972   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.898132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.898248   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.906377   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
	I0816 18:19:23.906708   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.907497   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.907513   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.907932   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.908240   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.909917   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.910141   75006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:23.910159   75006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:23.910177   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.912435   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912678   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.912710   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912858   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.912982   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.913066   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.913138   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
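The three ssh clients set up above all reuse the machine's generated key pair, guest user and IP recorded in the log. For debugging outside the harness, the same session can be opened by hand; this is an illustrative sketch (the StrictHostKeyChecking option and the minikube ssh shortcut are additions, everything else is taken from the log lines above):
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa \
	  docker@192.168.72.144
	# or let minikube resolve the key and address for the profile
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-256678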
	I0816 18:19:24.062487   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:24.083148   75006 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092886   75006 node_ready.go:49] node "default-k8s-diff-port-256678" has status "Ready":"True"
	I0816 18:19:24.092907   75006 node_ready.go:38] duration metric: took 9.72996ms for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092916   75006 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.099123   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
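At this point the node has reported Ready and the harness begins its per-pod waits. The equivalent manual checks would look roughly like the following (illustrative only; it assumes the kubeconfig context carries the profile name, which is how minikube writes it):
	kubectl --context default-k8s-diff-port-256678 get nodes
	kubectl --context default-k8s-diff-port-256678 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context default-k8s-diff-port-256678 -n kube-system wait --for=condition=Ready pod/coredns-6f6b679f8f-hx7sb --timeout=6m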
	I0816 18:19:24.184211   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:24.197461   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:24.197491   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:24.219263   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:24.258463   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:24.258498   75006 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:24.355822   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.355902   75006 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:24.436401   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
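Note that this profile deliberately points metrics-server at fake.domain/registry.k8s.io/echoserver:1.4 (the "Using image" line above), so its pod is expected to sit in Pending for the duration of the test. A quick way to confirm that state and see the image-pull reason, assuming the addon's usual k8s-app=metrics-server label, is:
	kubectl --context default-k8s-diff-port-256678 -n kube-system get deploy,pod -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-256678 -n kube-system describe pod -l k8s-app=metrics-server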
	I0816 18:19:24.866038   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866125   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866058   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866163   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866517   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866526   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866536   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866546   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866600   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866626   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866636   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866649   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866676   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866778   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866793   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866810   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866888   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866923   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866932   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886041   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.886065   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.886338   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.886359   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886384   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.225367   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225397   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225704   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.225720   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.225730   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225739   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225961   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.226005   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.226025   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.226043   75006 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-256678"
	I0816 18:19:25.227605   75006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
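With the three addons reported enabled, the resulting state can be inspected from the host with the same profile-scoped binary the report uses elsewhere (illustrative):
	out/minikube-linux-amd64 -p default-k8s-diff-port-256678 addons list
	kubectl --context default-k8s-diff-port-256678 get storageclass
	kubectl --context default-k8s-diff-port-256678 -n kube-system get deploy metrics-server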
	I0816 18:19:23.934167   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.934191   74828 pod_ready.go:82] duration metric: took 10.007408518s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.934200   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940226   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.940249   74828 pod_ready.go:82] duration metric: took 6.040513ms for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940260   74828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945330   74828 pod_ready.go:93] pod "etcd-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.945351   74828 pod_ready.go:82] duration metric: took 5.082362ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945361   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949772   74828 pod_ready.go:93] pod "kube-apiserver-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.949800   74828 pod_ready.go:82] duration metric: took 4.429575ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949810   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954308   74828 pod_ready.go:93] pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.954328   74828 pod_ready.go:82] duration metric: took 4.510361ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954338   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331265   74828 pod_ready.go:93] pod "kube-proxy-6g6zx" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.331306   74828 pod_ready.go:82] duration metric: took 376.9609ms for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331320   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730715   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.730740   74828 pod_ready.go:82] duration metric: took 399.412376ms for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730748   74828 pod_ready.go:39] duration metric: took 10.815561534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.730761   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:24.730820   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:24.746674   74828 api_server.go:72] duration metric: took 11.155216371s to wait for apiserver process to appear ...
	I0816 18:19:24.746697   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:24.746714   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:19:24.750801   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:19:24.751835   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:24.751864   74828 api_server.go:131] duration metric: took 5.159229ms to wait for apiserver health ...
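The healthz wait above is a plain GET against the apiserver's secure port. Because the default RBAC bootstrap allows unauthenticated reads of /healthz, /livez and /readyz, the same probe can usually be reproduced with curl; this is a sketch that assumes that bootstrap policy has not been tightened:
	curl -k https://192.168.50.50:8443/healthz
	curl -k "https://192.168.50.50:8443/readyz?verbose"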
	I0816 18:19:24.751872   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:24.935471   74828 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:24.935510   74828 system_pods.go:61] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:24.935520   74828 system_pods.go:61] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:24.935539   74828 system_pods.go:61] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:24.935548   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:24.935555   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:24.935562   74828 system_pods.go:61] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:24.935572   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:24.935584   74828 system_pods.go:61] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:24.935596   74828 system_pods.go:61] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:24.935607   74828 system_pods.go:74] duration metric: took 183.727841ms to wait for pod list to return data ...
	I0816 18:19:24.935621   74828 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:25.132713   74828 default_sa.go:45] found service account: "default"
	I0816 18:19:25.132740   74828 default_sa.go:55] duration metric: took 197.112152ms for default service account to be created ...
	I0816 18:19:25.132750   74828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:25.335012   74828 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:25.335043   74828 system_pods.go:89] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:25.335048   74828 system_pods.go:89] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:25.335052   74828 system_pods.go:89] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:25.335057   74828 system_pods.go:89] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:25.335061   74828 system_pods.go:89] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:25.335064   74828 system_pods.go:89] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:25.335068   74828 system_pods.go:89] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:25.335075   74828 system_pods.go:89] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:25.335081   74828 system_pods.go:89] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:25.335089   74828 system_pods.go:126] duration metric: took 202.33381ms to wait for k8s-apps to be running ...
	I0816 18:19:25.335098   74828 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:25.335141   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:25.349420   74828 system_svc.go:56] duration metric: took 14.310938ms WaitForService to wait for kubelet
	I0816 18:19:25.349457   74828 kubeadm.go:582] duration metric: took 11.758002576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:25.349480   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:25.532145   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:25.532175   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:25.532189   74828 node_conditions.go:105] duration metric: took 182.702662ms to run NodePressure ...
	I0816 18:19:25.532200   74828 start.go:241] waiting for startup goroutines ...
	I0816 18:19:25.532209   74828 start.go:246] waiting for cluster config update ...
	I0816 18:19:25.532222   74828 start.go:255] writing updated cluster config ...
	I0816 18:19:25.532529   74828 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:25.588070   74828 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:25.589615   74828 out.go:177] * Done! kubectl is now configured to use "no-preload-864476" cluster and "default" namespace by default
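The "Done!" line means the kubeconfig updated earlier now carries a context named after the profile; a minimal smoke test from the host would be:
	kubectl config use-context no-preload-864476
	kubectl get nodes -o wide
	kubectl -n kube-system get pods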
	I0816 18:19:24.440489   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:25.441683   74510 pod_ready.go:82] duration metric: took 4m0.007816418s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:19:25.441706   74510 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 18:19:25.441714   74510 pod_ready.go:39] duration metric: took 4m6.551547163s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:25.441726   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:25.441753   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:25.441805   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:25.492207   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.492235   74510 cri.go:89] found id: ""
	I0816 18:19:25.492245   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:25.492313   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.497307   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:25.497388   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:25.537185   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.537211   74510 cri.go:89] found id: ""
	I0816 18:19:25.537220   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:25.537422   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.546564   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:25.546644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:25.602794   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.602817   74510 cri.go:89] found id: ""
	I0816 18:19:25.602827   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:25.602879   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.609018   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:25.609097   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:25.657942   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:25.657970   74510 cri.go:89] found id: ""
	I0816 18:19:25.657980   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:25.658044   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.663485   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:25.663551   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:25.709526   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:25.709554   74510 cri.go:89] found id: ""
	I0816 18:19:25.709564   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:25.709612   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.715845   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:25.715898   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:25.766505   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:25.766522   74510 cri.go:89] found id: ""
	I0816 18:19:25.766529   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:25.766573   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.771051   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:25.771127   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:25.810669   74510 cri.go:89] found id: ""
	I0816 18:19:25.810699   74510 logs.go:276] 0 containers: []
	W0816 18:19:25.810711   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:25.810720   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:25.810779   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:25.851412   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.851432   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:25.851438   74510 cri.go:89] found id: ""
	I0816 18:19:25.851454   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:25.851507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.856154   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.860812   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:25.860837   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.910929   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:25.910957   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.951932   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:25.951959   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.999861   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:25.999894   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:26.036535   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:26.036559   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:26.089637   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:26.089675   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:26.157679   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:26.157714   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:26.171402   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:26.171432   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:26.209537   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:26.209564   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:26.252702   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:26.252732   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:26.303169   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:26.303203   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:26.784058   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:26.784090   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:26.904095   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:26.904137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
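The log-gathering pass above is driven entirely through crictl and journalctl inside the guest. Run over minikube ssh, the same data can be pulled by hand, mirroring the commands recorded in the log (the head -n1 is an addition to pick a single container ID):
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo /usr/bin/crictl logs --tail 400 "$ID"
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400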
	I0816 18:19:25.228674   75006 addons.go:510] duration metric: took 1.37992722s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:26.105147   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:28.107202   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.607933   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:19:32.608136   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:32.608430   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
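The kubelet-check failure for this kubeadm run means the kubelet never answered on its local healthz port. A debugging sketch for the node would start with the unit state and the same endpoint kubeadm polls (commands are standard; only the line count is an arbitrary choice):
	sudo systemctl status kubelet
	sudo journalctl -u kubelet -n 200 --no-pager
	curl -sSL http://localhost:10248/healthz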
	I0816 18:19:29.459100   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:29.476158   74510 api_server.go:72] duration metric: took 4m17.827179017s to wait for apiserver process to appear ...
	I0816 18:19:29.476183   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:29.476222   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:29.476279   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:29.509739   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:29.509767   74510 cri.go:89] found id: ""
	I0816 18:19:29.509776   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:29.509836   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.516078   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:29.516150   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:29.553766   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:29.553795   74510 cri.go:89] found id: ""
	I0816 18:19:29.553805   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:29.553857   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.558145   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:29.558210   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:29.599559   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:29.599583   74510 cri.go:89] found id: ""
	I0816 18:19:29.599594   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:29.599651   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.604108   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:29.604187   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:29.641990   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:29.642009   74510 cri.go:89] found id: ""
	I0816 18:19:29.642016   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:29.642062   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.645990   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:29.646047   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:29.679480   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:29.679505   74510 cri.go:89] found id: ""
	I0816 18:19:29.679514   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:29.679571   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.683361   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:29.683425   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:29.733167   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:29.733197   74510 cri.go:89] found id: ""
	I0816 18:19:29.733208   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:29.733266   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.737449   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:29.737518   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:29.771597   74510 cri.go:89] found id: ""
	I0816 18:19:29.771628   74510 logs.go:276] 0 containers: []
	W0816 18:19:29.771639   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:29.771647   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:29.771714   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:29.812346   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:29.812375   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:29.812381   74510 cri.go:89] found id: ""
	I0816 18:19:29.812390   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:29.812447   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.817909   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.821575   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:29.821602   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:30.288789   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:30.288836   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:30.332874   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:30.332904   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:30.347128   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:30.347168   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:30.456809   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:30.456845   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:30.505332   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:30.505362   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:30.540765   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:30.540798   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.576047   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:30.576077   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:30.611956   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:30.611992   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:30.678135   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:30.678177   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:30.732409   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:30.732437   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:30.773306   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:30.773331   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:30.827732   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:30.827763   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.367134   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:19:33.371523   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:19:33.372537   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:33.372560   74510 api_server.go:131] duration metric: took 3.896368169s to wait for apiserver health ...
	I0816 18:19:33.372568   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:33.372589   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:33.372653   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:33.409551   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:33.409579   74510 cri.go:89] found id: ""
	I0816 18:19:33.409590   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:33.409648   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.413727   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:33.413802   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:33.457246   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:33.457268   74510 cri.go:89] found id: ""
	I0816 18:19:33.457277   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:33.457337   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.461490   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:33.461556   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:33.497141   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:33.497169   74510 cri.go:89] found id: ""
	I0816 18:19:33.497180   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:33.497241   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.501353   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:33.501421   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:33.537797   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:33.537816   74510 cri.go:89] found id: ""
	I0816 18:19:33.537823   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:33.537877   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.541727   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:33.541784   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:33.575882   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:33.575905   74510 cri.go:89] found id: ""
	I0816 18:19:33.575913   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:33.575964   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.579592   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:33.579644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:33.614425   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.614447   74510 cri.go:89] found id: ""
	I0816 18:19:33.614455   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:33.614507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.618130   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:33.618178   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:33.652369   74510 cri.go:89] found id: ""
	I0816 18:19:33.652393   74510 logs.go:276] 0 containers: []
	W0816 18:19:33.652403   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:33.652410   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:33.652463   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:33.687276   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.687295   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:33.687301   74510 cri.go:89] found id: ""
	I0816 18:19:33.687309   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:33.687361   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.691100   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.695148   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:33.695179   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.110901   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.606195   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:34.110732   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.110764   75006 pod_ready.go:82] duration metric: took 10.011612904s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.110778   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116373   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.116392   75006 pod_ready.go:82] duration metric: took 5.607377ms for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116401   75006 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124005   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.124027   75006 pod_ready.go:82] duration metric: took 7.618878ms for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124039   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129603   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.129623   75006 pod_ready.go:82] duration metric: took 5.575452ms for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129633   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145449   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.145474   75006 pod_ready.go:82] duration metric: took 15.831669ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506455   75006 pod_ready.go:93] pod "kube-proxy-qsskg" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.506477   75006 pod_ready.go:82] duration metric: took 360.982998ms for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905345   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.905365   75006 pod_ready.go:82] duration metric: took 398.872303ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905373   75006 pod_ready.go:39] duration metric: took 10.812448791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:34.905386   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:34.905430   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:34.920554   75006 api_server.go:72] duration metric: took 11.071846456s to wait for apiserver process to appear ...
	I0816 18:19:34.920574   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:34.920589   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:19:34.927194   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:19:34.928420   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:34.928437   75006 api_server.go:131] duration metric: took 7.857168ms to wait for apiserver health ...
	I0816 18:19:34.928443   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:35.107220   75006 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:35.107248   75006 system_pods.go:61] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.107254   75006 system_pods.go:61] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.107258   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.107262   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.107267   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.107270   75006 system_pods.go:61] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.107274   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.107280   75006 system_pods.go:61] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.107288   75006 system_pods.go:61] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.107296   75006 system_pods.go:74] duration metric: took 178.847431ms to wait for pod list to return data ...
	I0816 18:19:35.107302   75006 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:35.303619   75006 default_sa.go:45] found service account: "default"
	I0816 18:19:35.303646   75006 default_sa.go:55] duration metric: took 196.337687ms for default service account to be created ...
	I0816 18:19:35.303655   75006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:35.508401   75006 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:35.508442   75006 system_pods.go:89] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.508452   75006 system_pods.go:89] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.508460   75006 system_pods.go:89] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.508466   75006 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.508471   75006 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.508477   75006 system_pods.go:89] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.508483   75006 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.508494   75006 system_pods.go:89] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.508504   75006 system_pods.go:89] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.508521   75006 system_pods.go:126] duration metric: took 204.859728ms to wait for k8s-apps to be running ...
	I0816 18:19:35.508544   75006 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:35.508605   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:35.523660   75006 system_svc.go:56] duration metric: took 15.109288ms WaitForService to wait for kubelet
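The WaitForService step above shells out to systemctl and treats a zero exit status as "kubelet is running". A small sketch of that pattern with os/exec; the sudo wrapper from the log is omitted, so run it as a user allowed to query systemd.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("kubelet is not active, exit code:", exitErr.ExitCode())
			return
		}
		fmt.Println("could not run systemctl:", err)
		return
	}
	fmt.Println("kubelet service is active")
}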
	I0816 18:19:35.523687   75006 kubeadm.go:582] duration metric: took 11.674985717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:35.523704   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:35.704770   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:35.704797   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:35.704808   75006 node_conditions.go:105] duration metric: took 181.099433ms to run NodePressure ...
	I0816 18:19:35.704818   75006 start.go:241] waiting for startup goroutines ...
	I0816 18:19:35.704824   75006 start.go:246] waiting for cluster config update ...
	I0816 18:19:35.704834   75006 start.go:255] writing updated cluster config ...
	I0816 18:19:35.705096   75006 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:35.753637   75006 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:35.755747   75006 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-256678" cluster and "default" namespace by default
	I0816 18:19:33.732856   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:33.732881   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.796167   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:33.796215   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.835842   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:33.835869   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:33.956412   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:33.956450   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:34.004102   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:34.004137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:34.050504   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:34.050548   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:34.087815   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:34.087850   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:34.124096   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:34.124127   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:34.193377   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:34.193410   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:34.206480   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:34.206505   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:34.240262   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:34.240305   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:34.591979   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:34.592014   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
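The logs.go steps above gather diagnostics by running crictl and journalctl over SSH on the node. A local sketch of the same two commands; the container ID is a placeholder copied from the log, and sudo can be dropped if your user already has access to the CRI socket and journal.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, as the log-gathering steps above do.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("error: %v\n%s", err, out)
	}
	return string(out)
}

func main() {
	// Last 400 lines of a container's logs, matching "crictl logs --tail 400 <id>" above.
	fmt.Println(run("sudo", "crictl", "logs", "--tail", "400", "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"))
	// Recent kubelet journal entries, matching "journalctl -u kubelet -n 400" above.
	fmt.Println(run("sudo", "journalctl", "-u", "kubelet", "-n", "400"))
}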
	I0816 18:19:37.142552   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:19:37.142580   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.142585   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.142590   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.142593   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.142596   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.142600   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.142605   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.142609   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.142616   74510 system_pods.go:74] duration metric: took 3.770043434s to wait for pod list to return data ...
	I0816 18:19:37.142625   74510 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:37.145135   74510 default_sa.go:45] found service account: "default"
	I0816 18:19:37.145161   74510 default_sa.go:55] duration metric: took 2.530779ms for default service account to be created ...
	I0816 18:19:37.145169   74510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:37.149397   74510 system_pods.go:86] 8 kube-system pods found
	I0816 18:19:37.149423   74510 system_pods.go:89] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.149431   74510 system_pods.go:89] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.149437   74510 system_pods.go:89] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.149443   74510 system_pods.go:89] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.149451   74510 system_pods.go:89] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.149458   74510 system_pods.go:89] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.149471   74510 system_pods.go:89] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.149480   74510 system_pods.go:89] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.149491   74510 system_pods.go:126] duration metric: took 4.31556ms to wait for k8s-apps to be running ...
	I0816 18:19:37.149502   74510 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:37.149564   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:37.166663   74510 system_svc.go:56] duration metric: took 17.15398ms WaitForService to wait for kubelet
	I0816 18:19:37.166692   74510 kubeadm.go:582] duration metric: took 4m25.517719342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:37.166711   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:37.170081   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:37.170102   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:37.170112   74510 node_conditions.go:105] duration metric: took 3.396116ms to run NodePressure ...
	I0816 18:19:37.170122   74510 start.go:241] waiting for startup goroutines ...
	I0816 18:19:37.170129   74510 start.go:246] waiting for cluster config update ...
	I0816 18:19:37.170138   74510 start.go:255] writing updated cluster config ...
	I0816 18:19:37.170406   74510 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:37.218383   74510 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:37.220397   74510 out.go:177] * Done! kubectl is now configured to use "embed-certs-777541" cluster and "default" namespace by default
	I0816 18:19:37.609143   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:37.609401   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:47.609941   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:47.610185   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:07.611108   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:07.611350   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613446   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:47.613708   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613742   75402 kubeadm.go:310] 
	I0816 18:20:47.613809   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:20:47.613902   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:20:47.613926   75402 kubeadm.go:310] 
	I0816 18:20:47.613976   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:20:47.614028   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:20:47.614160   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:20:47.614174   75402 kubeadm.go:310] 
	I0816 18:20:47.614323   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:20:47.614383   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:20:47.614432   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:20:47.614441   75402 kubeadm.go:310] 
	I0816 18:20:47.614601   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:20:47.614730   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:20:47.614751   75402 kubeadm.go:310] 
	I0816 18:20:47.614875   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:20:47.614982   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:20:47.615101   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:20:47.615217   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:20:47.615230   75402 kubeadm.go:310] 
	I0816 18:20:47.616865   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:20:47.616971   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:20:47.617028   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 18:20:47.617173   75402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
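The kubelet-check lines above repeatedly probe http://localhost:10248/healthz until kubeadm gives up. A small retry-loop sketch of that probe; the interval and attempt count are arbitrary choices for illustration, not kubeadm's own timing.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://localhost:10248/healthz" // kubelet healthz endpoint, as quoted in the log
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		fmt.Printf("attempt %d: kubelet not healthy yet (%v)\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for the kubelet")
}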
	
	I0816 18:20:47.617226   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:20:48.158066   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:20:48.172568   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:20:48.182445   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:20:48.182468   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:20:48.182527   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:20:48.191779   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:20:48.191847   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:20:48.201531   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:20:48.210495   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:20:48.210568   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:20:48.219701   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.228170   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:20:48.228242   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.237366   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:20:48.246335   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:20:48.246393   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
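The stale-config check above greps each file under /etc/kubernetes for the expected control-plane URL and removes files that do not mention it (or, as here, do not exist), so kubeadm can regenerate them. A hedged Go sketch of that cleanup logic; the paths and URL are copied from the log, and the error handling is deliberately simplified.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const wantServer = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), wantServer) {
			// Missing or pointing at the wrong server: remove it so kubeadm writes a fresh one.
			fmt.Println("removing stale config:", f)
			_ = os.Remove(f)
			continue
		}
		fmt.Println("keeping:", f)
	}
}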
	I0816 18:20:48.255655   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:20:48.321873   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:20:48.321930   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:20:48.462199   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:20:48.462324   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:20:48.462448   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:20:48.646565   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:20:48.648485   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:20:48.648605   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:20:48.648748   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:20:48.648895   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:20:48.648994   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:20:48.649088   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:20:48.649185   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:20:48.649282   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:20:48.649368   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:20:48.649485   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:20:48.649595   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:20:48.649649   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:20:48.649753   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:20:48.864525   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:20:49.035729   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:20:49.086765   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:20:49.222612   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:20:49.239121   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:20:49.240158   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:20:49.240200   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:20:49.366027   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:20:49.367770   75402 out.go:235]   - Booting up control plane ...
	I0816 18:20:49.367907   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:20:49.373047   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:20:49.373886   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:20:49.374691   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:20:49.379220   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:21:29.381362   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:21:29.381473   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:29.381700   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:34.381889   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:34.382065   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:44.382765   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:44.382964   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:04.383485   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:04.383748   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382265   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:44.382558   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382572   75402 kubeadm.go:310] 
	I0816 18:22:44.382628   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:22:44.382715   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:22:44.382741   75402 kubeadm.go:310] 
	I0816 18:22:44.382789   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:22:44.382837   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:22:44.382986   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:22:44.382997   75402 kubeadm.go:310] 
	I0816 18:22:44.383149   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:22:44.383202   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:22:44.383246   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:22:44.383258   75402 kubeadm.go:310] 
	I0816 18:22:44.383421   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:22:44.383534   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:22:44.383549   75402 kubeadm.go:310] 
	I0816 18:22:44.383743   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:22:44.383877   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:22:44.383993   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:22:44.384092   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:22:44.384103   75402 kubeadm.go:310] 
	I0816 18:22:44.384783   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:22:44.384895   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:22:44.384986   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:22:44.385062   75402 kubeadm.go:394] duration metric: took 8m1.372176417s to StartCluster
	I0816 18:22:44.385108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:22:44.385173   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:22:44.425862   75402 cri.go:89] found id: ""
	I0816 18:22:44.425892   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.425901   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:22:44.425909   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:22:44.425982   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:22:44.461988   75402 cri.go:89] found id: ""
	I0816 18:22:44.462019   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.462030   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:22:44.462038   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:22:44.462109   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:22:44.496063   75402 cri.go:89] found id: ""
	I0816 18:22:44.496095   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.496106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:22:44.496114   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:22:44.496175   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:22:44.529875   75402 cri.go:89] found id: ""
	I0816 18:22:44.529899   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.529906   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:22:44.529912   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:22:44.529958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:22:44.565745   75402 cri.go:89] found id: ""
	I0816 18:22:44.565781   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.565791   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:22:44.565798   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:22:44.565860   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:22:44.604122   75402 cri.go:89] found id: ""
	I0816 18:22:44.604149   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.604160   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:22:44.604168   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:22:44.604228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:22:44.636607   75402 cri.go:89] found id: ""
	I0816 18:22:44.636658   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.636669   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:22:44.636677   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:22:44.636736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:22:44.670942   75402 cri.go:89] found id: ""
	I0816 18:22:44.670973   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.670981   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
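After the timeout, the cri.go steps above look for each control-plane container with `crictl ps -a --quiet --name=<component>`, which prints only matching container IDs (none are found here). A sketch of that lookup; sudo is dropped, so add it back if your user cannot reach the CRI socket.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers returns the IDs of all containers (running or exited) whose name matches the filter.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := findContainers(component)
		if err != nil {
			fmt.Println(component, "lookup failed:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
	}
}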
	I0816 18:22:44.670989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:22:44.671001   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:22:44.722403   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:22:44.722433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:22:44.738587   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:22:44.738627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:22:44.854530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:22:44.854563   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:22:44.854579   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:22:44.957308   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:22:44.957342   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 18:22:44.997652   75402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:22:44.997714   75402 out.go:270] * 
	W0816 18:22:44.997804   75402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:44.997828   75402 out.go:270] * 
	W0816 18:22:44.998787   75402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:22:45.002189   75402 out.go:201] 
	W0816 18:22:45.003254   75402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:45.003310   75402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:22:45.003340   75402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:22:45.004826   75402 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.834738533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832917834712669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=048b59aa-6de0-4e24-91f3-a37a8d95c994 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.835296673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06f77105-9a12-4b97-ae0d-6290bf63960d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.835364445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06f77105-9a12-4b97-ae0d-6290bf63960d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.836223943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06f77105-9a12-4b97-ae0d-6290bf63960d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.888982231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c49de94-9496-4575-9f1a-1ccb21e53692 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.889066654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c49de94-9496-4575-9f1a-1ccb21e53692 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.890790482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6839de5-acb7-4ada-999d-850acb49fcba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.891458389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832917891415139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6839de5-acb7-4ada-999d-850acb49fcba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.892283180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34c2eed0-92a6-4469-a7bf-6769e615aee5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.892360748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34c2eed0-92a6-4469-a7bf-6769e615aee5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.892610882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34c2eed0-92a6-4469-a7bf-6769e615aee5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.939487733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b57dd755-b3f6-4570-b296-970528155911 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.939602264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b57dd755-b3f6-4570-b296-970528155911 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.941103121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5713ca25-fd14-4ca7-8310-f8dc08fda5dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.942004148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832917941974048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5713ca25-fd14-4ca7-8310-f8dc08fda5dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.943226319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2dabcd4-d829-4fdc-bef7-502d19b3db4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.943320256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2dabcd4-d829-4fdc-bef7-502d19b3db4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.943611227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2dabcd4-d829-4fdc-bef7-502d19b3db4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.988070411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae89e6b1-ed76-4106-b362-10b80d103d49 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.988191734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae89e6b1-ed76-4106-b362-10b80d103d49 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.989334551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0130e3b8-5e24-4ea0-b2c5-d62a77b17023 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.989767113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832917989740804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0130e3b8-5e24-4ea0-b2c5-d62a77b17023 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.990403832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf204384-4800-4a1a-9806-68e387996c3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.990463427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf204384-4800-4a1a-9806-68e387996c3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:37 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:28:37.990772860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf204384-4800-4a1a-9806-68e387996c3c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44ffea7ac7a4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   befe37e881645       coredns-6f6b679f8f-t74vf
	8150e1ec7b21f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   04fd86f5a1cdd       coredns-6f6b679f8f-hx7sb
	6e86814589080       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3f5f107543d86       storage-provisioner
	172b97dc3d12c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   a9a4d48c479ae       kube-proxy-qsskg
	f18a0112d14e1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   4b2d9901dd4e3       etcd-default-k8s-diff-port-256678
	b09f30797f03b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   6c4dbc8c596f1       kube-scheduler-default-k8s-diff-port-256678
	e7c0d25b7b476       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   0d896848c3a21       kube-controller-manager-default-k8s-diff-port-256678
	4c862407ecc85       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   192b1742f764a       kube-apiserver-default-k8s-diff-port-256678
	6c2cdc235d0c8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   8e408789e7ce7       kube-apiserver-default-k8s-diff-port-256678
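	The exited kube-apiserver container in the table above (attempt 1) can be inspected directly on the node with crictl, following the pattern quoted earlier in the kubeadm output (a sketch; the container ID prefix comes from the table and the runtime endpoint from the log above):
	
		# list all Kubernetes containers, including exited ones
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# read the logs of the exited kube-apiserver container
		crictl --runtime-endpoint /var/run/crio/crio.sock logs 6c2cdc235d0c8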
	
	
	==> coredns [44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-256678
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-256678
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=default-k8s-diff-port-256678
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 18:19:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-256678
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:28:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:24:35 +0000   Fri, 16 Aug 2024 18:19:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:24:35 +0000   Fri, 16 Aug 2024 18:19:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:24:35 +0000   Fri, 16 Aug 2024 18:19:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:24:35 +0000   Fri, 16 Aug 2024 18:19:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.144
	  Hostname:    default-k8s-diff-port-256678
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ac74fca6129435b88a4d0646225ea02
	  System UUID:                3ac74fca-6129-435b-88a4-d0646225ea02
	  Boot ID:                    ee2a0432-1e4d-4a1e-a4f0-5190b5e93053
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hx7sb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 coredns-6f6b679f8f-t74vf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-default-k8s-diff-port-256678                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-256678             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-256678    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-qsskg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-default-k8s-diff-port-256678             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-vmt5v                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m13s  kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node default-k8s-diff-port-256678 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node default-k8s-diff-port-256678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node default-k8s-diff-port-256678 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s  node-controller  Node default-k8s-diff-port-256678 event: Registered Node default-k8s-diff-port-256678 in Controller
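	The node description above is the kind of output returned by kubectl describe node; a minimal sketch, assuming the kubectl context is named after the minikube profile as elsewhere in this report:
	
		kubectl --context default-k8s-diff-port-256678 describe node default-k8s-diff-port-256678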
	
	
	==> dmesg <==
	[  +0.037204] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug16 18:14] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.944422] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.562124] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.963553] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.077412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059571] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.216898] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.127168] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.282903] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.421739] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.066664] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.676767] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.591055] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.635861] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.329118] kauditd_printk_skb: 31 callbacks suppressed
	[Aug16 18:19] kauditd_printk_skb: 6 callbacks suppressed
	[  +2.009646] systemd-fstab-generator[2591]: Ignoring "noauto" option for root device
	[  +4.679864] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.377937] systemd-fstab-generator[2914]: Ignoring "noauto" option for root device
	[  +5.809685] systemd-fstab-generator[3043]: Ignoring "noauto" option for root device
	[  +0.132143] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.543795] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9] <==
	{"level":"info","ts":"2024-08-16T18:19:13.815597Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T18:19:13.815998Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d88fef82e3f5d4b9","initial-advertise-peer-urls":["https://192.168.72.144:2380"],"listen-peer-urls":["https://192.168.72.144:2380"],"advertise-client-urls":["https://192.168.72.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T18:19:13.816104Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.144:2380"}
	{"level":"info","ts":"2024-08-16T18:19:13.817939Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.144:2380"}
	{"level":"info","ts":"2024-08-16T18:19:13.817971Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T18:19:14.641490Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d88fef82e3f5d4b9 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T18:19:14.641538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d88fef82e3f5d4b9 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T18:19:14.641576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d88fef82e3f5d4b9 received MsgPreVoteResp from d88fef82e3f5d4b9 at term 1"}
	{"level":"info","ts":"2024-08-16T18:19:14.641596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d88fef82e3f5d4b9 became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:14.641601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d88fef82e3f5d4b9 received MsgVoteResp from d88fef82e3f5d4b9 at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:14.641610Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d88fef82e3f5d4b9 became leader at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:14.641617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d88fef82e3f5d4b9 elected leader d88fef82e3f5d4b9 at term 2"}
	{"level":"info","ts":"2024-08-16T18:19:14.642823Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:14.643793Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d88fef82e3f5d4b9","local-member-attributes":"{Name:default-k8s-diff-port-256678 ClientURLs:[https://192.168.72.144:2379]}","request-path":"/0/members/d88fef82e3f5d4b9/attributes","cluster-id":"c71c7df57c5a06f3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T18:19:14.643832Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:19:14.644088Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c71c7df57c5a06f3","local-member-id":"d88fef82e3f5d4b9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:14.644173Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:14.644203Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:14.644213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:19:14.645119Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:14.645782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.144:2379"}
	{"level":"info","ts":"2024-08-16T18:19:14.646694Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:14.646937Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:14.646962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:14.648442Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:28:38 up 14 min,  0 users,  load average: 0.08, 0.21, 0.18
	Linux default-k8s-diff-port-256678 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a] <==
	E0816 18:24:17.027287       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0816 18:24:17.027313       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 18:24:17.028486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:24:17.028617       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:25:17.029523       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:25:17.029799       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:25:17.029545       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:25:17.030020       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:25:17.031084       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:25:17.031161       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:27:17.031691       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:27:17.031813       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:27:17.032033       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:27:17.032113       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:27:17.032941       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:27:17.034026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed] <==
	W0816 18:19:04.810128       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:04.911173       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:04.921759       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.029094       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.101982       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.128998       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.201164       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.215409       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.267239       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.154311       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.193562       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.352225       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.402268       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.464759       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.533596       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.556668       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.562204       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.565715       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.631149       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.674643       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.732607       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.768782       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.798812       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.837773       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.879757       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252] <==
	E0816 18:23:22.890567       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:23:23.426135       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:23:52.897698       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:23:53.434763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:24:22.905216       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:24:23.442452       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:24:35.157385       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-256678"
	E0816 18:24:52.914942       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:24:53.450558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:25:22.923562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:25:23.458287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:25:31.332371       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="287.47µs"
	I0816 18:25:46.332862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="143.151µs"
	E0816 18:25:52.929585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:25:53.465403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:26:22.937639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:26:23.474117       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:26:52.944454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:26:53.481596       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:27:22.950639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:27:23.488385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:27:52.956224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:27:53.495228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:28:22.962291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:28:23.503193       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 18:19:24.764045       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 18:19:24.776363       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.144"]
	E0816 18:19:24.779765       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 18:19:24.919416       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 18:19:24.919473       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 18:19:24.919505       1 server_linux.go:169] "Using iptables Proxier"
	I0816 18:19:24.926552       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 18:19:24.926835       1 server.go:483] "Version info" version="v1.31.0"
	I0816 18:19:24.926858       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:19:24.928613       1 config.go:197] "Starting service config controller"
	I0816 18:19:24.928643       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 18:19:24.928660       1 config.go:104] "Starting endpoint slice config controller"
	I0816 18:19:24.928663       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 18:19:24.929095       1 config.go:326] "Starting node config controller"
	I0816 18:19:24.929125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 18:19:25.031495       1 shared_informer.go:320] Caches are synced for node config
	I0816 18:19:25.031587       1 shared_informer.go:320] Caches are synced for service config
	I0816 18:19:25.032448       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d] <==
	W0816 18:19:16.079057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:19:16.079095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:16.079140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:19:16.079178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:16.079146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 18:19:16.079241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:16.931449       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:19:16.931505       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 18:19:16.952914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 18:19:16.953314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.018831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 18:19:17.018917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.025436       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 18:19:17.025491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.081370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 18:19:17.081419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.100126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 18:19:17.100178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.238986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:19:17.239034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.259577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:17.259699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.266963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:17.267040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0816 18:19:19.570544       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 18:27:33 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:33.316083    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:27:38 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:38.497518    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832858497144606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:38 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:38.497563    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832858497144606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:45 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:45.316139    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:27:48 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:48.499087    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832868498738667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:48 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:48.499695    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832868498738667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:56 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:56.316981    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:27:58 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:58.503092    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832878502566493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:58 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:27:58.503136    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832878502566493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:08 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:08.317389    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:28:08 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:08.504914    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832888504378814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:08 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:08.504954    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832888504378814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:18 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:18.339344    2921 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 18:28:18 default-k8s-diff-port-256678 kubelet[2921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 18:28:18 default-k8s-diff-port-256678 kubelet[2921]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 18:28:18 default-k8s-diff-port-256678 kubelet[2921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 18:28:18 default-k8s-diff-port-256678 kubelet[2921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 18:28:18 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:18.506224    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832898505782999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:18 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:18.506485    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832898505782999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:22 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:22.316290    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:28:28 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:28.507788    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832908507546749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:28 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:28.507813    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832908507546749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:37 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:37.316827    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:28:38 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:38.511560    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832918510685281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:38 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:28:38.511603    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832918510685281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c] <==
	I0816 18:19:25.473730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 18:19:25.496086       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 18:19:25.498032       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 18:19:25.527943       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 18:19:25.528208       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-256678_48c84abd-b3ad-478d-8e7c-ddb17557069c!
	I0816 18:19:25.531820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e42718b-b3c0-450b-9e14-b9e25bb5af15", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-256678_48c84abd-b3ad-478d-8e7c-ddb17557069c became leader
	I0816 18:19:25.631747       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-256678_48c84abd-b3ad-478d-8e7c-ddb17557069c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-vmt5v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 describe pod metrics-server-6867b74b74-vmt5v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-256678 describe pod metrics-server-6867b74b74-vmt5v: exit status 1 (99.244775ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-vmt5v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-256678 describe pod metrics-server-6867b74b74-vmt5v: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0816 18:20:14.631447   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:21:12.269943   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:21:23.210489   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-777541 -n embed-certs-777541
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 18:28:37.733313328 +0000 UTC m=+6018.051858362
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-777541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-777541 logs -n 25: (2.383524202s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo cat                      | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:10:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:10:53.101401   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101412   75402 out.go:358] Setting ErrFile to fd 2...
	I0816 18:10:53.101418   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101600   75402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:10:53.102131   75402 out.go:352] Setting JSON to false
	I0816 18:10:53.103018   75402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1723825102,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:10:53.103076   75402 start.go:139] virtualization: kvm guest
	I0816 18:10:53.105216   75402 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:10:53.106496   75402 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:10:53.106504   75402 notify.go:220] Checking for updates...
	I0816 18:10:53.109235   75402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:10:53.110572   75402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:10:53.111747   75402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:10:53.113164   75402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:10:53.114589   75402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:10:53.116284   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:10:53.116746   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.116806   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.132445   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0816 18:10:53.132886   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.133456   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.133494   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.133836   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.134015   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.135791   75402 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:10:53.136942   75402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:10:53.137229   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.137260   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.151853   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0816 18:10:53.152327   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.152881   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.152905   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.153159   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.153307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.188002   75402 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:10:53.189287   75402 start.go:297] selected driver: kvm2
	I0816 18:10:53.189309   75402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.189432   75402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:10:53.190098   75402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.190187   75402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:10:53.205024   75402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:10:53.205386   75402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:10:53.205417   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:10:53.205425   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:10:53.205458   75402 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.205557   75402 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.207241   75402 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:10:53.208254   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:10:53.208286   75402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:10:53.208298   75402 cache.go:56] Caching tarball of preloaded images
	I0816 18:10:53.208386   75402 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:10:53.208400   75402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:10:53.208510   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:10:53.208736   75402 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:10:54.604889   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:10:57.676891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:03.756940   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:06.828911   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:12.908885   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:15.980925   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:22.060891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:25.132961   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:31.212919   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:34.284876   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:40.365032   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:43.436910   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:49.516914   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:52.588969   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:58.668915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:01.740965   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:07.820898   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:10.892922   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:16.972913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:20.044913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:26.124921   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:29.196968   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:35.276952   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:38.348971   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:44.428932   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:47.500897   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:53.580923   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:56.652927   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:02.732992   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:05.804929   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:11.884953   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:14.956943   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:21.036963   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:24.108915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:30.188851   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:33.260936   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:36.264963   74828 start.go:364] duration metric: took 4m2.37855556s to acquireMachinesLock for "no-preload-864476"
	I0816 18:13:36.265020   74828 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:36.265027   74828 fix.go:54] fixHost starting: 
	I0816 18:13:36.265379   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:36.265409   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:36.280707   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0816 18:13:36.281167   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:36.281747   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:13:36.281778   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:36.282122   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:36.282330   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:36.282457   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:13:36.284064   74828 fix.go:112] recreateIfNeeded on no-preload-864476: state=Stopped err=<nil>
	I0816 18:13:36.284084   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	W0816 18:13:36.284217   74828 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:36.286749   74828 out.go:177] * Restarting existing kvm2 VM for "no-preload-864476" ...
	I0816 18:13:36.262619   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:36.262654   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.262944   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:13:36.262967   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.263222   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:13:36.264803   74510 machine.go:96] duration metric: took 4m37.429582668s to provisionDockerMachine
	I0816 18:13:36.264858   74510 fix.go:56] duration metric: took 4m37.449862851s for fixHost
	I0816 18:13:36.264867   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 4m37.449881856s
	W0816 18:13:36.264895   74510 start.go:714] error starting host: provision: host is not running
	W0816 18:13:36.264994   74510 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 18:13:36.265005   74510 start.go:729] Will try again in 5 seconds ...
	I0816 18:13:36.288329   74828 main.go:141] libmachine: (no-preload-864476) Calling .Start
	I0816 18:13:36.288484   74828 main.go:141] libmachine: (no-preload-864476) Ensuring networks are active...
	I0816 18:13:36.289285   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network default is active
	I0816 18:13:36.289912   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network mk-no-preload-864476 is active
	I0816 18:13:36.290318   74828 main.go:141] libmachine: (no-preload-864476) Getting domain xml...
	I0816 18:13:36.291176   74828 main.go:141] libmachine: (no-preload-864476) Creating domain...
	I0816 18:13:37.504191   74828 main.go:141] libmachine: (no-preload-864476) Waiting to get IP...
	I0816 18:13:37.505110   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.505575   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.505621   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.505543   75973 retry.go:31] will retry after 308.411866ms: waiting for machine to come up
	I0816 18:13:37.816219   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.816877   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.816931   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.816852   75973 retry.go:31] will retry after 321.445064ms: waiting for machine to come up
	I0816 18:13:38.140594   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.141059   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.141082   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.141018   75973 retry.go:31] will retry after 337.935433ms: waiting for machine to come up
	I0816 18:13:38.480699   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.481110   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.481135   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.481033   75973 retry.go:31] will retry after 449.775503ms: waiting for machine to come up
	I0816 18:13:41.266589   74510 start.go:360] acquireMachinesLock for embed-certs-777541: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:13:38.932812   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.933232   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.933259   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.933171   75973 retry.go:31] will retry after 482.676832ms: waiting for machine to come up
	I0816 18:13:39.417939   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:39.418323   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:39.418350   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:39.418276   75973 retry.go:31] will retry after 740.37516ms: waiting for machine to come up
	I0816 18:13:40.160491   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:40.160917   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:40.160942   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:40.160867   75973 retry.go:31] will retry after 1.10464436s: waiting for machine to come up
	I0816 18:13:41.267213   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:41.267654   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:41.267680   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:41.267613   75973 retry.go:31] will retry after 1.395131164s: waiting for machine to come up
	I0816 18:13:42.664731   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:42.665229   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:42.665252   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:42.665181   75973 retry.go:31] will retry after 1.560403289s: waiting for machine to come up
	I0816 18:13:44.226847   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:44.227375   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:44.227404   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:44.227342   75973 retry.go:31] will retry after 1.647944685s: waiting for machine to come up
	I0816 18:13:45.876965   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:45.877411   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:45.877440   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:45.877366   75973 retry.go:31] will retry after 1.971325886s: waiting for machine to come up
	I0816 18:13:47.849950   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:47.850457   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:47.850490   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:47.850383   75973 retry.go:31] will retry after 2.95642392s: waiting for machine to come up
	I0816 18:13:50.810560   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:50.811013   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:50.811045   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:50.810930   75973 retry.go:31] will retry after 4.510008193s: waiting for machine to come up
	I0816 18:13:56.529339   75006 start.go:364] duration metric: took 4m6.515818295s to acquireMachinesLock for "default-k8s-diff-port-256678"
	I0816 18:13:56.529444   75006 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:56.529459   75006 fix.go:54] fixHost starting: 
	I0816 18:13:56.529851   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:56.529890   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:56.547077   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0816 18:13:56.547585   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:56.548068   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:13:56.548091   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:56.548421   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:56.548610   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:13:56.548766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:13:56.550373   75006 fix.go:112] recreateIfNeeded on default-k8s-diff-port-256678: state=Stopped err=<nil>
	I0816 18:13:56.550414   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	W0816 18:13:56.550604   75006 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:56.552781   75006 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-256678" ...
	I0816 18:13:55.326062   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.326558   74828 main.go:141] libmachine: (no-preload-864476) Found IP for machine: 192.168.50.50
	I0816 18:13:55.326576   74828 main.go:141] libmachine: (no-preload-864476) Reserving static IP address...
	I0816 18:13:55.326593   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has current primary IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.327109   74828 main.go:141] libmachine: (no-preload-864476) Reserved static IP address: 192.168.50.50
	I0816 18:13:55.327142   74828 main.go:141] libmachine: (no-preload-864476) Waiting for SSH to be available...
	I0816 18:13:55.327167   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.327191   74828 main.go:141] libmachine: (no-preload-864476) DBG | skip adding static IP to network mk-no-preload-864476 - found existing host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"}
	I0816 18:13:55.327205   74828 main.go:141] libmachine: (no-preload-864476) DBG | Getting to WaitForSSH function...
	I0816 18:13:55.329001   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329350   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.329378   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329534   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH client type: external
	I0816 18:13:55.329574   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa (-rw-------)
	I0816 18:13:55.329604   74828 main.go:141] libmachine: (no-preload-864476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:13:55.329622   74828 main.go:141] libmachine: (no-preload-864476) DBG | About to run SSH command:
	I0816 18:13:55.329636   74828 main.go:141] libmachine: (no-preload-864476) DBG | exit 0
	I0816 18:13:55.452553   74828 main.go:141] libmachine: (no-preload-864476) DBG | SSH cmd err, output: <nil>: 
	I0816 18:13:55.452964   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetConfigRaw
	I0816 18:13:55.453557   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.455951   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456334   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.456370   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456564   74828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/config.json ...
	I0816 18:13:55.456782   74828 machine.go:93] provisionDockerMachine start ...
	I0816 18:13:55.456801   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:55.456983   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.459149   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459547   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.459570   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459730   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.459918   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460068   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460207   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.460418   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.460603   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.460637   74828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:13:55.564875   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:13:55.564903   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565203   74828 buildroot.go:166] provisioning hostname "no-preload-864476"
	I0816 18:13:55.565229   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565455   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.568114   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568578   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.568612   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568777   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.568912   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569023   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569200   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.569448   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.569649   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.569667   74828 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-864476 && echo "no-preload-864476" | sudo tee /etc/hostname
	I0816 18:13:55.686349   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-864476
	
	I0816 18:13:55.686378   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.689171   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689572   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.689608   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689792   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.690008   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690183   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690418   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.690623   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.690782   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.690798   74828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864476/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:13:55.800352   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:55.800386   74828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:13:55.800436   74828 buildroot.go:174] setting up certificates
	I0816 18:13:55.800452   74828 provision.go:84] configureAuth start
	I0816 18:13:55.800470   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.800793   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.803388   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.803786   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.803822   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.804025   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.806567   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.806977   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.807003   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.807129   74828 provision.go:143] copyHostCerts
	I0816 18:13:55.807178   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:13:55.807198   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:13:55.807286   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:13:55.807401   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:13:55.807412   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:13:55.807439   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:13:55.807554   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:13:55.807565   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:13:55.807588   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:13:55.807648   74828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.no-preload-864476 san=[127.0.0.1 192.168.50.50 localhost minikube no-preload-864476]
	I0816 18:13:55.881474   74828 provision.go:177] copyRemoteCerts
	I0816 18:13:55.881529   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:13:55.881558   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.884424   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.884952   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.884983   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.885138   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.885335   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.885486   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.885669   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:55.966915   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:13:55.989812   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:13:56.011744   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:13:56.032745   74828 provision.go:87] duration metric: took 232.276991ms to configureAuth
	I0816 18:13:56.032778   74828 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:13:56.033001   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:13:56.033096   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.035919   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036283   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.036311   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036499   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.036713   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036975   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.037100   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.037275   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.037294   74828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:13:56.296112   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:13:56.296140   74828 machine.go:96] duration metric: took 839.343895ms to provisionDockerMachine
	I0816 18:13:56.296152   74828 start.go:293] postStartSetup for "no-preload-864476" (driver="kvm2")
	I0816 18:13:56.296162   74828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:13:56.296177   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.296537   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:13:56.296570   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.299838   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300364   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.300396   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300603   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.300833   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.300985   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.301187   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.383095   74828 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:13:56.387172   74828 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:13:56.387200   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:13:56.387286   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:13:56.387392   74828 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:13:56.387550   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:13:56.396072   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:13:56.419470   74828 start.go:296] duration metric: took 123.306644ms for postStartSetup
	I0816 18:13:56.419509   74828 fix.go:56] duration metric: took 20.154482872s for fixHost
	I0816 18:13:56.419529   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.422047   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422454   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.422503   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422573   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.422764   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.422963   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.423150   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.423388   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.423597   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.423610   74828 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:13:56.529164   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832036.506687395
	
	I0816 18:13:56.529190   74828 fix.go:216] guest clock: 1723832036.506687395
	I0816 18:13:56.529200   74828 fix.go:229] Guest: 2024-08-16 18:13:56.506687395 +0000 UTC Remote: 2024-08-16 18:13:56.419513163 +0000 UTC m=+262.671840210 (delta=87.174232ms)
	I0816 18:13:56.529229   74828 fix.go:200] guest clock delta is within tolerance: 87.174232ms
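The clock check above is plain arithmetic on the two timestamps it prints: 0.506687395 s (guest) minus 0.419513163 s (host-side reference, same second) is 0.087174232 s, the 87.174232ms delta the log then accepts. A minimal Go sketch of that comparison; the 1s tolerance value is an assumption for illustration, not read from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Fractional-second parts of the two timestamps from the log
	// (both fall within the same second, 1723832036).
	guest := 0.506687395
	remote := 0.419513163

	delta := time.Duration((guest - remote) * float64(time.Second))
	tolerance := time.Second // assumed tolerance, for illustration only

	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
}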
	I0816 18:13:56.529246   74828 start.go:83] releasing machines lock for "no-preload-864476", held for 20.264231324s
	I0816 18:13:56.529276   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.529645   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:56.532279   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532599   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.532660   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532824   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533348   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533522   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533604   74828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:13:56.533663   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.533759   74828 ssh_runner.go:195] Run: cat /version.json
	I0816 18:13:56.533786   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.536427   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536711   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536822   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.536845   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536996   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537071   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.537105   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.537191   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537334   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537430   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537497   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.537582   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537728   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537964   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.654319   74828 ssh_runner.go:195] Run: systemctl --version
	I0816 18:13:56.660640   74828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:13:56.806359   74828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:13:56.812415   74828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:13:56.812489   74828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:13:56.828095   74828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:13:56.828122   74828 start.go:495] detecting cgroup driver to use...
	I0816 18:13:56.828186   74828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:13:56.843041   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:13:56.856322   74828 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:13:56.856386   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:13:56.869899   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:13:56.884609   74828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:13:56.990986   74828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:13:57.134218   74828 docker.go:233] disabling docker service ...
	I0816 18:13:57.134283   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:13:57.156415   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:13:57.172969   74828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:13:57.328279   74828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:13:57.448217   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:13:57.461630   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:13:57.478199   74828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:13:57.478271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.487845   74828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:13:57.487918   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.497895   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.509260   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.519090   74828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:13:57.529351   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.539816   74828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.559271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
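The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image becomes registry.k8s.io/pause:3.10, cgroup_manager becomes cgroupfs, conmon_cgroup is reset to "pod", and a default_sysctls block gains net.ipv4.ip_unprivileged_port_start=0. A minimal Go sketch of the same line-oriented rewrite for the first two edits, done with regexp instead of shelling out to sed (the remaining edits follow the same pattern; this is an illustration, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Mirror the logged sed expressions: replace the whole line that sets each key.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}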
	I0816 18:13:57.573027   74828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:13:57.583410   74828 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:13:57.583490   74828 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:13:57.598762   74828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:13:57.609589   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:13:57.727016   74828 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:13:57.876815   74828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:13:57.876876   74828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:13:57.882172   74828 start.go:563] Will wait 60s for crictl version
	I0816 18:13:57.882241   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:57.885706   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:13:57.926981   74828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:13:57.927070   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.957802   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.984920   74828 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:13:57.986450   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:57.989584   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990205   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:57.990257   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990552   74828 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 18:13:57.994584   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
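The /bin/bash one-liner above is the usual idiom for pinning host.minikube.internal: drop any existing entry from /etc/hosts, append the fresh "192.168.50.1	host.minikube.internal" line, and copy the temp file back into place. A minimal Go sketch of the same filter-and-append update (paths and the entry come from the log; error handling is trimmed for brevity):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.1\thost.minikube.internal"

	data, _ := os.ReadFile("/etc/hosts")

	// Drop any line that already ends in the host.minikube.internal name,
	// then append the desired entry, mirroring the grep -v / echo pipeline.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}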
	I0816 18:13:58.007996   74828 kubeadm.go:883] updating cluster {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:13:58.008137   74828 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:13:58.008184   74828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:13:58.041643   74828 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:13:58.041672   74828 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:13:58.041751   74828 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.041778   74828 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.041794   74828 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.041741   74828 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.041779   74828 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.041899   74828 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 18:13:58.041918   74828 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.041798   74828 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.043388   74828 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.043394   74828 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.289223   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.299125   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.308703   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 18:13:58.339031   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.351467   74828 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 18:13:58.351514   74828 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.351572   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.358019   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.359198   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.385487   74828 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 18:13:58.385529   74828 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.385571   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.392417   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.506834   74828 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 18:13:58.506886   74828 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.506896   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.506924   74828 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 18:13:58.506963   74828 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.507003   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.506928   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507072   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.507004   74828 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 18:13:58.507099   74828 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.507124   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507160   74828 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 18:13:58.507181   74828 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.507228   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.562410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.562469   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.562481   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.562554   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.562590   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.562628   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.686069   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.690288   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.690352   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.692851   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.692911   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.693027   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.777263   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:56.554238   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Start
	I0816 18:13:56.554426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring networks are active...
	I0816 18:13:56.555221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network default is active
	I0816 18:13:56.555599   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network mk-default-k8s-diff-port-256678 is active
	I0816 18:13:56.556004   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Getting domain xml...
	I0816 18:13:56.556809   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Creating domain...
	I0816 18:13:57.825641   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting to get IP...
	I0816 18:13:57.826681   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827158   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:57.827129   76107 retry.go:31] will retry after 267.923612ms: waiting for machine to come up
	I0816 18:13:58.096794   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.097158   76107 retry.go:31] will retry after 286.726817ms: waiting for machine to come up
	I0816 18:13:58.386213   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386782   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.386704   76107 retry.go:31] will retry after 386.697374ms: waiting for machine to come up
	I0816 18:13:58.775483   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.775989   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.776014   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.775949   76107 retry.go:31] will retry after 554.398617ms: waiting for machine to come up
	I0816 18:13:59.331517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332002   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.331943   76107 retry.go:31] will retry after 589.24333ms: waiting for machine to come up
	I0816 18:13:58.823309   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 18:13:58.823318   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 18:13:58.823410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.823434   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.823437   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:13:58.823549   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.836312   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.894363   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 18:13:58.894428   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 18:13:58.894447   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:13:58.933183   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 18:13:58.933290   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:13:58.934389   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 18:13:58.934456   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 18:13:58.934491   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 18:13:58.934550   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:13:58.934569   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:13:58.934682   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792156   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.897633034s)
	I0816 18:14:00.792196   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 18:14:00.792224   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.89763588s)
	I0816 18:14:00.792257   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 18:14:00.792230   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792281   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.858968807s)
	I0816 18:14:00.792300   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 18:14:00.792317   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792355   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.85778817s)
	I0816 18:14:00.792370   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 18:14:00.792415   74828 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.857704749s)
	I0816 18:14:00.792422   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.857843473s)
	I0816 18:14:00.792436   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 18:14:00.792457   74828 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 18:14:00.792491   74828 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792528   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:14:00.797103   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:03.171070   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.378727123s)
	I0816 18:14:03.171118   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 18:14:03.171149   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.374004458s)
	I0816 18:14:03.171155   74828 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171274   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171225   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:59.922834   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923439   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.923368   76107 retry.go:31] will retry after 779.656786ms: waiting for machine to come up
	I0816 18:14:00.704929   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705395   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705417   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:00.705344   76107 retry.go:31] will retry after 790.87115ms: waiting for machine to come up
	I0816 18:14:01.497557   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.497999   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.498052   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:01.497981   76107 retry.go:31] will retry after 919.825072ms: waiting for machine to come up
	I0816 18:14:02.419821   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420280   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420312   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:02.420227   76107 retry.go:31] will retry after 1.304504009s: waiting for machine to come up
	I0816 18:14:03.725928   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726378   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726400   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:03.726344   76107 retry.go:31] will retry after 2.105251359s: waiting for machine to come up
	I0816 18:14:06.879864   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.708558161s)
	I0816 18:14:06.879904   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 18:14:06.879905   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.708563338s)
	I0816 18:14:06.879935   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:06.879981   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:06.879991   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:08.769077   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.889063218s)
	I0816 18:14:08.769114   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 18:14:08.769145   74828 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769231   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769146   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.889146748s)
	I0816 18:14:08.769343   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 18:14:08.769431   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:05.833605   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834078   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834109   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:05.834025   76107 retry.go:31] will retry after 2.042421539s: waiting for machine to come up
	I0816 18:14:07.878000   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878510   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878541   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:07.878432   76107 retry.go:31] will retry after 2.777402825s: waiting for machine to come up
	I0816 18:14:10.627286   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.858028746s)
	I0816 18:14:10.627331   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 18:14:10.627346   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.857891086s)
	I0816 18:14:10.627358   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:10.627378   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 18:14:10.627402   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:11.977277   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.349851948s)
	I0816 18:14:11.977314   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 18:14:11.977339   74828 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:11.977389   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:12.630939   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 18:14:12.630999   74828 cache_images.go:123] Successfully loaded all cached images
	I0816 18:14:12.631004   74828 cache_images.go:92] duration metric: took 14.589319022s to LoadCachedImages
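The LoadCachedImages pass that just finished follows one pattern per image: podman image inspect to see whether the runtime already has it at the expected ID, crictl rmi to clear a stale copy, then podman load -i on the tarball that was copied into /var/lib/minikube/images. A minimal Go sketch of that check-remove-load loop for a single image, simplified to a presence check rather than the hash comparison the real code does (image name and tar path are examples taken from the kube-proxy entries above):

package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(image, tar string) error {
	// Skip the load when the runtime already has the image.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil
	}

	// Remove any stale copy, ignoring "not found" style failures.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()

	// Load the cached tarball that was copied into the VM.
	out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tar, err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage("registry.k8s.io/kube-proxy:v1.31.0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0")
	fmt.Println("load result:", err)
}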
	I0816 18:14:12.631016   74828 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.31.0 crio true true} ...
	I0816 18:14:12.631132   74828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-864476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:12.631207   74828 ssh_runner.go:195] Run: crio config
	I0816 18:14:12.683072   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:12.683094   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:12.683107   74828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:12.683129   74828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864476 NodeName:no-preload-864476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:12.683276   74828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864476"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:12.683345   74828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:12.693879   74828 binaries.go:44] Found k8s binaries, skipping transfer
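The kubeadm config dumped above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---", written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal stdlib-only Go sketch that splits such a file on the document separator and lists each document's kind (purely illustrative; not taken from minikube's sources):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Documents in the generated file are separated by a bare "---" line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}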
	I0816 18:14:12.693941   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:12.702601   74828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 18:14:12.718235   74828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:12.733455   74828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 18:14:12.748878   74828 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:12.752276   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:12.763390   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:12.872450   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:12.888531   74828 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476 for IP: 192.168.50.50
	I0816 18:14:12.888569   74828 certs.go:194] generating shared ca certs ...
	I0816 18:14:12.888589   74828 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:12.888783   74828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:12.888845   74828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:12.888860   74828 certs.go:256] generating profile certs ...
	I0816 18:14:12.888971   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/client.key
	I0816 18:14:12.889070   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key.30cf6dcb
	I0816 18:14:12.889136   74828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key
	I0816 18:14:12.889298   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:12.889339   74828 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:12.889351   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:12.889391   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:12.889421   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:12.889452   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:12.889507   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:12.890441   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:12.919571   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:12.947375   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:12.975197   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:13.007308   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:14:13.056151   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:13.080317   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:13.102231   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:13.124045   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:13.145312   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:13.166806   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:13.188173   74828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:13.203594   74828 ssh_runner.go:195] Run: openssl version
	I0816 18:14:13.209148   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:13.220266   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224569   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224635   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.230141   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:13.241362   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:13.252437   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256658   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256712   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.262006   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:13.273168   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:13.284518   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288566   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288611   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.293944   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
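The certs.go steps above install each CA into /usr/share/ca-certificates and then link it from /etc/ssl/certs under its OpenSSL subject-hash name, which is why the log pairs openssl x509 -hash -noout with ln -fs to a <hash>.0 path (51391683.0 for 16753.pem, 3ec20f2e.0 for 167532.pem, b5213941.0 for minikubeCA.pem). A minimal Go sketch of the same hash-and-link step, shelling out to openssl just as the log does (paths are the logged ones; an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash mirrors the logged openssl/ln pair: compute the OpenSSL
// subject hash of a PEM certificate and symlink it as /etc/ssl/certs/<hash>.0.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}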
	I0816 18:14:13.305148   74828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:13.309460   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:13.315123   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:13.320854   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:13.326676   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:13.332183   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:13.337794   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
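Each of the openssl x509 ... -checkend 86400 runs above asks whether the certificate is still valid 86400 seconds (24 hours) from now; a failing check is what pushes the tooling toward regenerating that cert. A minimal Go equivalent of the same test using crypto/x509 (the file path is one of the logged ones; purely illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching the spirit of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}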
	I0816 18:14:13.343369   74828 kubeadm.go:392] StartCluster: {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:13.343470   74828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:13.343527   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.384490   74828 cri.go:89] found id: ""
	I0816 18:14:13.384567   74828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:13.395094   74828 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:13.395116   74828 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:13.395183   74828 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:13.406605   74828 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:13.407898   74828 kubeconfig.go:125] found "no-preload-864476" server: "https://192.168.50.50:8443"
	I0816 18:14:13.410808   74828 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:13.420516   74828 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.50
	I0816 18:14:13.420541   74828 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:13.420554   74828 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:13.420589   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.459174   74828 cri.go:89] found id: ""
	I0816 18:14:13.459242   74828 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:13.475598   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:13.484685   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:13.484707   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:13.484756   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:13.493092   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:13.493147   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:13.501649   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:13.509987   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:13.510028   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:13.518500   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.526689   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:13.526737   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.535606   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:13.545130   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:13.545185   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:13.553947   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
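	The sequence above is the stale-config cleanup: each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed when the check fails (here grep exits with status 2 because the files do not exist at all on the freshly restarted VM), and the regenerated kubeadm.yaml is then copied into place. A hedged sketch of that pattern, run locally rather than over SSH as minikube does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Endpoint and file list taken from the log above.
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep -q exits 0 on a match, 1 on no match, 2 if the file is missing.
			if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
				fmt.Printf("removing stale %s: %v\n", f, err)
				_ = os.Remove(f) // the log does this with `sudo rm -f`
			}
		}
	}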
	I0816 18:14:13.562763   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:13.663383   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:10.657652   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658105   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:10.657999   76107 retry.go:31] will retry after 3.856225979s: waiting for machine to come up
	I0816 18:14:14.518358   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.518875   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Found IP for machine: 192.168.72.144
	I0816 18:14:14.518896   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserving static IP address...
	I0816 18:14:14.518915   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has current primary IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.519296   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserved static IP address: 192.168.72.144
	I0816 18:14:14.519334   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.519346   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for SSH to be available...
	I0816 18:14:14.519377   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | skip adding static IP to network mk-default-k8s-diff-port-256678 - found existing host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"}
	I0816 18:14:14.519391   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Getting to WaitForSSH function...
	I0816 18:14:14.521566   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.521926   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.521969   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.522133   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH client type: external
	I0816 18:14:14.522160   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa (-rw-------)
	I0816 18:14:14.522202   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:14.522221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | About to run SSH command:
	I0816 18:14:14.522235   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | exit 0
	I0816 18:14:14.648603   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:14.649005   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetConfigRaw
	I0816 18:14:14.649616   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:14.652340   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.652767   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.652796   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.653116   75006 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/config.json ...
	I0816 18:14:14.653337   75006 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:14.653361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:14.653598   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.656062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656412   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.656442   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656565   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.656757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.656895   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.657015   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.657128   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.657312   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.657321   75006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:14.768721   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:14.768749   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.768990   75006 buildroot.go:166] provisioning hostname "default-k8s-diff-port-256678"
	I0816 18:14:14.769021   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.769246   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.772310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772675   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.772704   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772922   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.773084   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773242   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.773564   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.773764   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.773783   75006 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-256678 && echo "default-k8s-diff-port-256678" | sudo tee /etc/hostname
	I0816 18:14:14.894016   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-256678
	
	I0816 18:14:14.894047   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.896797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897150   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.897184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897424   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.897613   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897933   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.898124   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.898286   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.898303   75006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-256678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-256678/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-256678' | sudo tee -a /etc/hosts; 
				fi
			fi
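	The inline script above keeps /etc/hosts consistent with the freshly set hostname: if no line already ends in the machine name, the 127.0.1.1 entry is either rewritten in place or appended. A sketch of how such a script can be templated for an arbitrary hostname (default-k8s-diff-port-256678 is simply the machine name from this run):

	package main

	import "fmt"

	// hostsFixupScript builds the same shell fragment seen in the log for a
	// given hostname; minikube runs the resulting script over SSH as root.
	func hostsFixupScript(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixupScript("default-k8s-diff-port-256678"))
	}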
	I0816 18:14:15.814480   75402 start.go:364] duration metric: took 3m22.605706427s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:14:15.814546   75402 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:15.814554   75402 fix.go:54] fixHost starting: 
	I0816 18:14:15.815001   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:15.815062   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:15.834710   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0816 18:14:15.835124   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:15.835653   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:14:15.835676   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:15.836005   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:15.836258   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:15.836392   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:14:15.838010   75402 fix.go:112] recreateIfNeeded on old-k8s-version-783465: state=Stopped err=<nil>
	I0816 18:14:15.838043   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	W0816 18:14:15.838200   75402 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:15.840214   75402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	I0816 18:14:15.016150   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:15.016176   75006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:15.016200   75006 buildroot.go:174] setting up certificates
	I0816 18:14:15.016213   75006 provision.go:84] configureAuth start
	I0816 18:14:15.016231   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:15.016518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.019132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019687   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.019725   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019907   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.022758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023192   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.023233   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023408   75006 provision.go:143] copyHostCerts
	I0816 18:14:15.023468   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:15.023489   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:15.023552   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:15.023649   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:15.023659   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:15.023681   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:15.023733   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:15.023740   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:15.023756   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:15.023802   75006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-256678 san=[127.0.0.1 192.168.72.144 default-k8s-diff-port-256678 localhost minikube]
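	The generated server certificate carries the SAN set shown above (127.0.0.1, the VM IP 192.168.72.144, the machine name, localhost, minikube) so the provisioned endpoints can be reached under any of those names. A rough, self-signed illustration of building a certificate with that SAN set using Go's crypto/x509; minikube actually signs it with the ca.pem/ca-key.pem pair referenced in the log, so this is only an approximation of the shape of the cert:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-256678"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the config dump
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.144")},
			DNSNames:     []string{"default-k8s-diff-port-256678", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; the real flow uses the minikube CA as the parent.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}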
	I0816 18:14:15.142549   75006 provision.go:177] copyRemoteCerts
	I0816 18:14:15.142601   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:15.142625   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.145515   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.145867   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.145903   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.146029   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.146250   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.146436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.146604   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.230785   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:15.258450   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 18:14:15.286008   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:15.308690   75006 provision.go:87] duration metric: took 292.45797ms to configureAuth
	I0816 18:14:15.308725   75006 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:15.308927   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:15.308996   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.311959   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.312332   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312492   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.312713   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.312890   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.313028   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.313184   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.313369   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.313387   75006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:15.574487   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:15.574517   75006 machine.go:96] duration metric: took 921.166622ms to provisionDockerMachine
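	provisionDockerMachine finishes by dropping a CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube, passing the service CIDR 10.96.0.0/12 as an insecure-registry range, and then restarting cri-o. A minimal sketch of writing that drop-in; the real flow pipes it through sudo tee over SSH:

	package main

	import (
		"log"
		"os"
	)

	func main() {
		const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		// Written as root inside the VM; running this locally also requires root.
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0o644); err != nil {
			log.Fatal(err)
		}
		// cri-o is then restarted so the option takes effect (systemctl restart crio).
	}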
	I0816 18:14:15.574529   75006 start.go:293] postStartSetup for "default-k8s-diff-port-256678" (driver="kvm2")
	I0816 18:14:15.574538   75006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:15.574552   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.574835   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:15.574854   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.577944   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578266   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.578295   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578469   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.578651   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.578800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.578912   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.664404   75006 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:15.668362   75006 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:15.668389   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:15.668459   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:15.668562   75006 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:15.668705   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:15.678830   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:15.702087   75006 start.go:296] duration metric: took 127.545675ms for postStartSetup
	I0816 18:14:15.702129   75006 fix.go:56] duration metric: took 19.172678011s for fixHost
	I0816 18:14:15.702152   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.704680   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705117   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.705154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705288   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.705479   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705643   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.705922   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.706084   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.706095   75006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:15.814313   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832055.788948458
	
	I0816 18:14:15.814337   75006 fix.go:216] guest clock: 1723832055.788948458
	I0816 18:14:15.814348   75006 fix.go:229] Guest: 2024-08-16 18:14:15.788948458 +0000 UTC Remote: 2024-08-16 18:14:15.702133997 +0000 UTC m=+265.826862410 (delta=86.814461ms)
	I0816 18:14:15.814372   75006 fix.go:200] guest clock delta is within tolerance: 86.814461ms
	I0816 18:14:15.814382   75006 start.go:83] releasing machines lock for "default-k8s-diff-port-256678", held for 19.284958633s
	I0816 18:14:15.814416   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.814723   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.817995   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.818467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818620   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819299   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819616   75006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:15.819656   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.819840   75006 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:15.819869   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.822797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823189   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823521   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823659   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.823804   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823811   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.823828   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823965   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824064   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.824177   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.824234   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.824368   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824486   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.948709   75006 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:15.956239   75006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:16.103538   75006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:16.109299   75006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:16.109385   75006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:16.125056   75006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:16.125092   75006 start.go:495] detecting cgroup driver to use...
	I0816 18:14:16.125188   75006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:16.141741   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:16.158917   75006 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:16.158993   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:16.173256   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:16.187026   75006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:16.332452   75006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:16.503181   75006 docker.go:233] disabling docker service ...
	I0816 18:14:16.503254   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:16.517961   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:16.535991   75006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:16.667874   75006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:16.799300   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:16.813852   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:16.832891   75006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:16.832953   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.845621   75006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:16.845716   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.856045   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.866117   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.877586   75006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:16.887643   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.897164   75006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.915247   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.924887   75006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:16.933645   75006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:16.933709   75006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:16.946920   75006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:16.955928   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:17.090148   75006 ssh_runner.go:195] Run: sudo systemctl restart crio
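	The sed edits above point cri-o at the registry.k8s.io/pause:3.10 pause image, switch cgroup_manager to cgroupfs with conmon_cgroup = "pod", and open net.ipv4.ip_unprivileged_port_start=0; br_netfilter is then loaded, ip_forward is enabled, and cri-o is restarted. A small sketch of the same rewrite applied to the drop-in contents in memory rather than via sed (sample input lines are assumed, not taken from the VM):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"`
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Println(conf)
	}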
	I0816 18:14:17.241434   75006 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:17.241531   75006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:17.246730   75006 start.go:563] Will wait 60s for crictl version
	I0816 18:14:17.246796   75006 ssh_runner.go:195] Run: which crictl
	I0816 18:14:17.250397   75006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:17.289194   75006 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:17.289295   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.324401   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.361220   75006 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:15.841411   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .Start
	I0816 18:14:15.841576   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:14:15.842263   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:14:15.842609   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:14:15.843023   75402 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:14:15.844141   75402 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:14:17.215163   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:14:17.216445   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.216933   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.217029   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.216922   76298 retry.go:31] will retry after 286.243503ms: waiting for machine to come up
	I0816 18:14:17.504645   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.505240   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.505262   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.505175   76298 retry.go:31] will retry after 275.715235ms: waiting for machine to come up
	I0816 18:14:17.782804   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.783365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.783392   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.783292   76298 retry.go:31] will retry after 343.088129ms: waiting for machine to come up
	I0816 18:14:14.936549   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273126441s)
	I0816 18:14:14.936584   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.139778   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.201814   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.270552   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:15.270667   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:15.771379   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.271296   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.335242   74828 api_server.go:72] duration metric: took 1.064710561s to wait for apiserver process to appear ...
	I0816 18:14:16.335265   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:16.335282   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:16.335727   74828 api_server.go:269] stopped: https://192.168.50.50:8443/healthz: Get "https://192.168.50.50:8443/healthz": dial tcp 192.168.50.50:8443: connect: connection refused
	I0816 18:14:16.835361   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:17.362436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:17.365728   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366122   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:17.366154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366403   75006 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:17.370322   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:17.383153   75006 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:17.383303   75006 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:17.383364   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:17.420269   75006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:17.420339   75006 ssh_runner.go:195] Run: which lz4
	I0816 18:14:17.424477   75006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:17.428507   75006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:17.428547   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:18.717202   75006 crio.go:462] duration metric: took 1.292754157s to copy over tarball
	I0816 18:14:18.717278   75006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
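	Because `sudo crictl images` showed none of the v1.31.0 images, the preload path copies the ~389 MB preloaded-images tarball into the VM and unpacks it into /var with lz4, preserving security.capability xattrs so file capabilities survive. A local sketch of the unpack step only (the scp over SSH is omitted):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Equivalent of: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("unpack failed: %v\n%s", err, out)
		}
	}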
	I0816 18:14:19.241691   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.241729   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.241746   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.292883   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.292924   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.336097   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.363715   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.363753   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:19.835848   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.840615   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.840666   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.336291   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.343751   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:20.343785   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.835470   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.841217   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:14:20.849609   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:20.849642   74828 api_server.go:131] duration metric: took 4.514370955s to wait for apiserver health ...
	I0816 18:14:20.849653   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:20.849662   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:20.851828   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:18.127538   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.128044   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.128077   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.127958   76298 retry.go:31] will retry after 543.91951ms: waiting for machine to come up
	I0816 18:14:18.673778   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.674328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.674351   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.674274   76298 retry.go:31] will retry after 694.978788ms: waiting for machine to come up
	I0816 18:14:19.370976   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.371577   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.371605   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.371538   76298 retry.go:31] will retry after 578.640883ms: waiting for machine to come up
	I0816 18:14:19.952328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.952917   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.952941   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.952863   76298 retry.go:31] will retry after 820.19233ms: waiting for machine to come up
	I0816 18:14:20.774767   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:20.775175   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:20.775200   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:20.775134   76298 retry.go:31] will retry after 1.262201815s: waiting for machine to come up
	I0816 18:14:22.038872   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:22.039357   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:22.039385   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:22.039302   76298 retry.go:31] will retry after 1.164593889s: waiting for machine to come up
	I0816 18:14:20.853121   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:20.866117   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:20.888451   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:20.902482   74828 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:20.902530   74828 system_pods.go:61] "coredns-6f6b679f8f-w9cbm" [9b50c913-f492-4432-a50a-e0f727a7b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:20.902545   74828 system_pods.go:61] "etcd-no-preload-864476" [e45a11b8-fa3e-4a6e-9d06-5d82fdaf20dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:20.902557   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [1cf82575-b520-4bc0-9e90-d40c02b4468d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:20.902568   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [8c9123e0-16a4-4940-8464-4bec383bac90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:20.902577   74828 system_pods.go:61] "kube-proxy-vdqxz" [0332e87e-5c0c-41f5-88a9-31b7f8494eb6] Running
	I0816 18:14:20.902587   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [6139753f-b5cf-4af5-a9fa-03fb220e3dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:20.902606   74828 system_pods.go:61] "metrics-server-6867b74b74-rxtwg" [f0d04fc9-24c0-47e3-afdc-f250ef07900c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:20.902620   74828 system_pods.go:61] "storage-provisioner" [65303dd8-27d7-4bf3-ae58-ff5fe556f17f] Running
	I0816 18:14:20.902631   74828 system_pods.go:74] duration metric: took 14.150825ms to wait for pod list to return data ...
	I0816 18:14:20.902645   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:20.909305   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:20.909342   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:20.909355   74828 node_conditions.go:105] duration metric: took 6.699359ms to run NodePressure ...
	I0816 18:14:20.909377   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:21.193348   74828 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198555   74828 kubeadm.go:739] kubelet initialised
	I0816 18:14:21.198585   74828 kubeadm.go:740] duration metric: took 5.20722ms waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198595   74828 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:21.204695   74828 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.212855   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212877   74828 pod_ready.go:82] duration metric: took 8.157781ms for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.212889   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212899   74828 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.220125   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220150   74828 pod_ready.go:82] duration metric: took 7.241861ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.220158   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220166   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.226930   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226957   74828 pod_ready.go:82] duration metric: took 6.783402ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.226967   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226976   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.292011   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292054   74828 pod_ready.go:82] duration metric: took 65.066708ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.292066   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292075   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692536   74828 pod_ready.go:93] pod "kube-proxy-vdqxz" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:21.692564   74828 pod_ready.go:82] duration metric: took 400.476293ms for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692577   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.155261   75006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437939279s)
	I0816 18:14:21.155296   75006 crio.go:469] duration metric: took 2.438065212s to extract the tarball
	I0816 18:14:21.155325   75006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:21.199451   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:21.249963   75006 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:14:21.249990   75006 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:14:21.250002   75006 kubeadm.go:934] updating node { 192.168.72.144 8444 v1.31.0 crio true true} ...
	I0816 18:14:21.250129   75006 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-256678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:21.250211   75006 ssh_runner.go:195] Run: crio config
	I0816 18:14:21.299619   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:21.299644   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:21.299663   75006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:21.299684   75006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-256678 NodeName:default-k8s-diff-port-256678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:21.299813   75006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-256678"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:21.299880   75006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:21.310127   75006 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:21.310205   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:21.319566   75006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 18:14:21.337043   75006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:21.352319   75006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 18:14:21.370117   75006 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:21.373986   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:21.386518   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:21.508855   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:21.525184   75006 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678 for IP: 192.168.72.144
	I0816 18:14:21.525209   75006 certs.go:194] generating shared ca certs ...
	I0816 18:14:21.525230   75006 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:21.525413   75006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:21.525468   75006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:21.525481   75006 certs.go:256] generating profile certs ...
	I0816 18:14:21.525604   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/client.key
	I0816 18:14:21.525688   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key.ac6d83aa
	I0816 18:14:21.525738   75006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key
	I0816 18:14:21.525888   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:21.525931   75006 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:21.525944   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:21.525991   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:21.526028   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:21.526052   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:21.526101   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:21.526719   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:21.556992   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:21.590311   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:21.624782   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:21.655118   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 18:14:21.695431   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:21.722575   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:21.744870   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:14:21.770850   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:21.793906   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:21.817643   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:21.839584   75006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:21.856447   75006 ssh_runner.go:195] Run: openssl version
	I0816 18:14:21.862104   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:21.872584   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876886   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876945   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.882424   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:21.892761   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:21.904506   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909624   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909687   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.915765   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:21.927160   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:21.937381   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941423   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941477   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.946741   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:21.958082   75006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:21.962431   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:21.969889   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:21.977302   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:21.983468   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:21.989115   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:21.994569   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:21.999962   75006 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:22.000090   75006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:22.000139   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.034063   75006 cri.go:89] found id: ""
	I0816 18:14:22.034158   75006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:22.043988   75006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:22.044003   75006 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:22.044040   75006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:22.053276   75006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:22.054255   75006 kubeconfig.go:125] found "default-k8s-diff-port-256678" server: "https://192.168.72.144:8444"
	I0816 18:14:22.056408   75006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:22.065394   75006 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I0816 18:14:22.065429   75006 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:22.065443   75006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:22.065496   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.112797   75006 cri.go:89] found id: ""
	I0816 18:14:22.112889   75006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:22.130231   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:22.139432   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:22.139451   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:22.139493   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:14:22.148118   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:22.148168   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:22.158088   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:14:22.166741   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:22.166803   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:22.175578   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.185238   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:22.185286   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.194074   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:14:22.205053   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:22.205105   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:22.216506   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:22.228754   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:22.344597   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.006750   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.275587   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.356515   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.432890   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:23.432991   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.933834   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.433736   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.205567   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:23.206051   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:23.206078   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:23.206007   76298 retry.go:31] will retry after 2.304886921s: waiting for machine to come up
	I0816 18:14:25.512748   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:25.513295   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:25.513321   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:25.513261   76298 retry.go:31] will retry after 2.603393394s: waiting for machine to come up
	I0816 18:14:23.801346   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:26.199045   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:28.205981   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:24.933846   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.954190   75006 api_server.go:72] duration metric: took 1.521307594s to wait for apiserver process to appear ...
	I0816 18:14:24.954219   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:24.954242   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.835517   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.835552   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.835567   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.842961   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.842992   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.954290   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.963372   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:27.963400   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.455035   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.460244   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.460279   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.954475   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.962766   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.962802   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.454298   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.458650   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.458681   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.954582   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.959359   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.959384   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.455077   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.461068   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.461099   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.954772   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.960557   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.960588   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:31.455232   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:31.460157   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:14:31.471015   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:31.471046   75006 api_server.go:131] duration metric: took 6.516819341s to wait for apiserver health ...
	I0816 18:14:31.471056   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:31.471064   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:31.472930   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:28.118105   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:28.118675   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:28.118706   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:28.118637   76298 retry.go:31] will retry after 2.400714985s: waiting for machine to come up
	I0816 18:14:30.521623   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:30.522157   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:30.522196   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:30.522111   76298 retry.go:31] will retry after 3.210603239s: waiting for machine to come up
	I0816 18:14:30.699930   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:33.200755   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:31.474388   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:31.484723   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
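The 496-byte conflist that minikube copies to /etc/cni/net.d/1-k8s.conflist is not reproduced in this log. Purely as an illustrative sketch (the bridge name, the 10.244.0.0/16 subnet, and the use of tee are assumptions for the example, not values captured from this run), a minimal bridge-plus-portmap CNI configuration of that general shape could be written by hand like this:

# illustrative bridge CNI config only; the real 1-k8s.conflist from this run may differ
sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF' >/dev/null
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

The leading "1-" in the filename makes the file sort first in /etc/cni/net.d, and the first conflist in lexical order is the one the container runtime typically treats as the default network.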
	I0816 18:14:31.502094   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:31.511169   75006 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:31.511207   75006 system_pods.go:61] "coredns-6f6b679f8f-2sgmk" [3c98207c-ab70-435e-a725-3d6b108515d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:31.511215   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [c6d0dbe2-8b80-4fb2-8408-7b2e668cf4cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:31.511221   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [4506e38e-6685-41f8-98b1-738b35476ad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:31.511228   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [14282ea5-2ebc-4ea6-8e06-829e86296333] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:31.511232   75006 system_pods.go:61] "kube-proxy-l4lr2" [880ceec6-c3d1-4934-b02a-7a175ded8a02] Running
	I0816 18:14:31.511236   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [b122d1cd-12e8-4b87-a179-c50baf4c89d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:31.511241   75006 system_pods.go:61] "metrics-server-6867b74b74-fc4h4" [3cb9624e-98b4-4edb-a2de-d6a971520cac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:31.511244   75006 system_pods.go:61] "storage-provisioner" [79442d12-c28b-447e-ae96-e4c2ddb5c4da] Running
	I0816 18:14:31.511250   75006 system_pods.go:74] duration metric: took 9.137933ms to wait for pod list to return data ...
	I0816 18:14:31.511256   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:31.515339   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:31.515361   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:31.515370   75006 node_conditions.go:105] duration metric: took 4.110442ms to run NodePressure ...
	I0816 18:14:31.515387   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:31.774197   75006 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778258   75006 kubeadm.go:739] kubelet initialised
	I0816 18:14:31.778276   75006 kubeadm.go:740] duration metric: took 4.052927ms waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778283   75006 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:31.782595   75006 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:33.788205   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.053312   74510 start.go:364] duration metric: took 53.786665535s to acquireMachinesLock for "embed-certs-777541"
	I0816 18:14:35.053367   74510 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:35.053372   74510 fix.go:54] fixHost starting: 
	I0816 18:14:35.053687   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:35.053718   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:35.073509   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0816 18:14:35.073935   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:35.074396   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:14:35.074420   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:35.074749   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:35.074928   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:35.075102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:14:35.076710   74510 fix.go:112] recreateIfNeeded on embed-certs-777541: state=Stopped err=<nil>
	I0816 18:14:35.076738   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	W0816 18:14:35.076903   74510 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:35.078759   74510 out.go:177] * Restarting existing kvm2 VM for "embed-certs-777541" ...
	I0816 18:14:33.735394   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735898   75402 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:14:33.735925   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735933   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:14:33.736407   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.736439   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:14:33.736459   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | skip adding static IP to network mk-old-k8s-version-783465 - found existing host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"}
	I0816 18:14:33.736478   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:14:33.736492   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:14:33.739028   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739377   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.739397   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739596   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:14:33.739689   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:14:33.739724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:33.739747   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:14:33.739785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:14:33.861036   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:33.861405   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:14:33.862105   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:33.864850   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865245   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.865272   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865542   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:14:33.865796   75402 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:33.865820   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:33.866053   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.868422   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868761   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.868795   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868911   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.869095   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869267   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869415   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.869579   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.869796   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.869810   75402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:33.972880   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:33.972907   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973141   75402 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:14:33.973172   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973378   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.976198   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976530   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.976563   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976747   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.976945   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977086   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977228   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.977369   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.977529   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.977540   75402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:14:34.086092   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:14:34.086123   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.088785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089107   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.089132   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089285   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.089527   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089684   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089828   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.089997   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.090152   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.090168   75402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:34.200744   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:34.200779   75402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:34.200834   75402 buildroot.go:174] setting up certificates
	I0816 18:14:34.200848   75402 provision.go:84] configureAuth start
	I0816 18:14:34.200862   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:34.201175   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:34.203868   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204297   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.204344   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204506   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.207067   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207441   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.207464   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207810   75402 provision.go:143] copyHostCerts
	I0816 18:14:34.207869   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:34.207892   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:34.207951   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:34.208058   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:34.208069   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:34.208103   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:34.208180   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:34.208192   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:34.208220   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:34.208291   75402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
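minikube performs this server-certificate generation in-process; the log records only the CA paths, the org, and the SAN list. Purely as a hand-run sketch of the equivalent operation (the openssl invocation is an assumption for illustration, not how minikube actually issues the cert), a certificate with the same subject alternative names could be produced like this:

# sketch only: issue a server cert signed by the minikube CA with the SANs listed above
openssl req -newkey rsa:2048 -nodes -subj "/O=jenkins.old-k8s-version-783465" \
  -keyout server-key.pem -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.211,DNS:localhost,DNS:minikube,DNS:old-k8s-version-783465\n') \
  -out server.pem

The resulting server.pem and server-key.pem correspond to the machines/server.pem and server-key.pem files that the copyRemoteCerts step below pushes to /etc/docker on the guest.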
	I0816 18:14:34.413800   75402 provision.go:177] copyRemoteCerts
	I0816 18:14:34.413857   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:34.413881   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.416724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417138   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.417173   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417345   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.417673   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.417894   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.418089   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.495519   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:34.517414   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:14:34.540423   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:14:34.563983   75402 provision.go:87] duration metric: took 363.122639ms to configureAuth
	I0816 18:14:34.564019   75402 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:34.564229   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:14:34.564299   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.567149   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567550   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.567580   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567753   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.567935   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568098   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568255   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.568448   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.568659   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.568680   75402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:34.824064   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:34.824091   75402 machine.go:96] duration metric: took 958.278616ms to provisionDockerMachine
	I0816 18:14:34.824106   75402 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:14:34.824120   75402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:34.824169   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:34.824556   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:34.824599   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.827203   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827517   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.827547   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827677   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.827869   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.828033   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.828171   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.912148   75402 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:34.916652   75402 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:34.916681   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:34.916755   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:34.916864   75402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:34.916989   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:34.927061   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:34.949703   75402 start.go:296] duration metric: took 125.581331ms for postStartSetup
	I0816 18:14:34.949743   75402 fix.go:56] duration metric: took 19.13519024s for fixHost
	I0816 18:14:34.949763   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.952740   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953090   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.953124   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.953532   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953715   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953861   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.954029   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.954229   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.954242   75402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:35.053143   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832075.025252523
	
	I0816 18:14:35.053171   75402 fix.go:216] guest clock: 1723832075.025252523
	I0816 18:14:35.053180   75402 fix.go:229] Guest: 2024-08-16 18:14:35.025252523 +0000 UTC Remote: 2024-08-16 18:14:34.949747236 +0000 UTC m=+221.880938919 (delta=75.505287ms)
	I0816 18:14:35.053204   75402 fix.go:200] guest clock delta is within tolerance: 75.505287ms
	I0816 18:14:35.053211   75402 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 19.238692888s
	I0816 18:14:35.053243   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.053549   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:35.056365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.056792   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.056823   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.057009   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057509   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057731   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057831   75402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:35.057892   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.057951   75402 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:35.057972   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.060543   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060733   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060987   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061016   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061126   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061148   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061154   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061319   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061339   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061456   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061518   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061639   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061720   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.061773   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.174137   75402 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:35.181704   75402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:35.323490   75402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:35.330733   75402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:35.330807   75402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:35.350653   75402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:35.350679   75402 start.go:495] detecting cgroup driver to use...
	I0816 18:14:35.350763   75402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:35.372307   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:35.386513   75402 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:35.386598   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:35.400406   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:35.414761   75402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:35.540356   75402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:35.675726   75402 docker.go:233] disabling docker service ...
	I0816 18:14:35.675793   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:35.691169   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:35.707288   75402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:35.858149   75402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:35.981654   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:35.996396   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:36.013656   75402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:14:36.013711   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.023839   75402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:36.023907   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.033889   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.043727   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
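	The three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the v1.20-era pause image and the cgroupfs driver that this kubelet configuration expects. As a quick sanity check (illustrative only, not part of the test run), the effective values can be read back with the same `crio config` call the log issues later:
	    sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
	    # expected after the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"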
	I0816 18:14:36.053496   75402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:36.063694   75402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:36.072919   75402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:36.072979   75402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:36.085707   75402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
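	For context, the failed sysctl probe above is the normal path on a freshly booted VM: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the log immediately falls back to modprobe and then enables IP forwarding. A minimal sketch of the same sequence (illustrative):
	    sudo modprobe br_netfilter                           # creates the net.bridge.* sysctls
	    sudo sysctl net.bridge.bridge-nf-call-iptables       # now resolves instead of "cannot stat"
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"  # same forwarding toggle as in the log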
	I0816 18:14:36.095377   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:36.219235   75402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:36.384915   75402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:36.384990   75402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:36.392122   75402 start.go:563] Will wait 60s for crictl version
	I0816 18:14:36.392196   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:36.397589   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:36.443581   75402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:36.443710   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.473740   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.512542   75402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:14:36.513678   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:36.517404   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.517912   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:36.517948   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.518190   75402 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:36.523577   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:36.536188   75402 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:36.536361   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:14:36.536425   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:36.587027   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:36.587085   75402 ssh_runner.go:195] Run: which lz4
	I0816 18:14:36.590780   75402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:36.594635   75402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:36.594673   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 18:14:35.080033   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Start
	I0816 18:14:35.080220   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring networks are active...
	I0816 18:14:35.080971   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network default is active
	I0816 18:14:35.081366   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network mk-embed-certs-777541 is active
	I0816 18:14:35.081887   74510 main.go:141] libmachine: (embed-certs-777541) Getting domain xml...
	I0816 18:14:35.082634   74510 main.go:141] libmachine: (embed-certs-777541) Creating domain...
	I0816 18:14:36.459300   74510 main.go:141] libmachine: (embed-certs-777541) Waiting to get IP...
	I0816 18:14:36.460282   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.460801   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.460883   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.460778   76422 retry.go:31] will retry after 291.491491ms: waiting for machine to come up
	I0816 18:14:36.754548   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.755372   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.755412   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.755313   76422 retry.go:31] will retry after 356.347467ms: waiting for machine to come up
	I0816 18:14:37.113124   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.113704   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.113739   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.113676   76422 retry.go:31] will retry after 386.244375ms: waiting for machine to come up
	I0816 18:14:37.502241   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.502796   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.502826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.502750   76422 retry.go:31] will retry after 437.69847ms: waiting for machine to come up
	I0816 18:14:37.942667   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.943423   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.943456   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.943378   76422 retry.go:31] will retry after 709.064032ms: waiting for machine to come up
	I0816 18:14:38.653840   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:38.654349   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:38.654386   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:38.654297   76422 retry.go:31] will retry after 594.417028ms: waiting for machine to come up
	I0816 18:14:34.700134   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:34.700158   74828 pod_ready.go:82] duration metric: took 13.007571631s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:34.700171   74828 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:36.707977   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:38.708527   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.790842   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:37.791236   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:37.791278   75006 pod_ready.go:82] duration metric: took 6.008656328s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:37.791294   75006 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798513   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:39.798543   75006 pod_ready.go:82] duration metric: took 2.007240233s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798557   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:38.127403   75402 crio.go:462] duration metric: took 1.536659915s to copy over tarball
	I0816 18:14:38.127504   75402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:41.109575   75402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982013621s)
	I0816 18:14:41.109639   75402 crio.go:469] duration metric: took 2.982198625s to extract the tarball
	I0816 18:14:41.109650   75402 ssh_runner.go:146] rm: /preloaded.tar.lz4
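	The preload step above copies the 473237281-byte tarball (about 451 MiB) over SSH and unpacks it with tar's -I flag, which delegates decompression to lz4. An equivalent, more explicit pipeline form using the same paths as the log (sketch only):
	    lz4 -dc /preloaded.tar.lz4 | sudo tar --xattrs --xattrs-include security.capability -C /var -xf -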
	I0816 18:14:41.152940   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:41.185863   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:41.185892   75402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:14:41.185982   75402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.186003   75402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.186036   75402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.186044   75402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.186103   75402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:14:41.186171   75402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187521   75402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:14:41.187532   75402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187542   75402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.187527   75402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.187595   75402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.187605   75402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.187688   75402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.187840   75402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.421551   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:14:41.462506   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.467716   75402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:14:41.467758   75402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:14:41.467810   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508571   75402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:14:41.508638   75402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.508687   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508691   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.514560   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.520003   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.526475   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.526892   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.533271   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.569269   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.569426   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.694043   75402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:14:41.694100   75402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.694049   75402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:14:41.694210   75402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.694173   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.694268   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.701292   75402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:14:41.701337   75402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.701389   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.707345   75402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:14:41.707415   75402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.707467   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.711820   75402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:14:41.711854   75402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.711896   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.723813   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.723850   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.723814   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.723939   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.723951   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.724003   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.724060   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.872645   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.872674   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:14:41.873747   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.873786   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.873891   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.873899   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.873960   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.997519   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:14:42.002048   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:42.002091   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:42.002140   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:42.002178   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:42.002218   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:42.070993   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:42.115418   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:14:42.115527   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:14:42.115623   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:14:42.115631   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:14:42.115689   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:14:42.235706   75402 cache_images.go:92] duration metric: took 1.049784726s to LoadCachedImages
	W0816 18:14:42.235807   75402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0816 18:14:42.235821   75402 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:14:42.235939   75402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:42.236024   75402 ssh_runner.go:195] Run: crio config
	I0816 18:14:42.286742   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:14:42.286763   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:42.286771   75402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:42.286789   75402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:14:42.286904   75402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:42.286961   75402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:14:42.297015   75402 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:42.297098   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:42.306400   75402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:14:42.322812   75402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:42.339791   75402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 18:14:42.356930   75402 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:42.360578   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:42.373248   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:42.495499   75402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:42.511910   75402 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:14:42.511942   75402 certs.go:194] generating shared ca certs ...
	I0816 18:14:42.511964   75402 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:42.512147   75402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:42.512206   75402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:42.512220   75402 certs.go:256] generating profile certs ...
	I0816 18:14:42.512361   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:14:42.512431   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:14:42.512483   75402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:14:42.512664   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:42.512709   75402 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:42.512724   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:42.512754   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:42.512794   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:42.512825   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:42.512881   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:42.513660   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:42.552291   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:42.585617   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:42.611017   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:42.638092   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:14:42.676877   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:14:42.710091   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:42.743734   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:42.779905   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:42.802779   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:42.826432   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:42.849286   75402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:42.866901   75402 ssh_runner.go:195] Run: openssl version
	I0816 18:14:42.872283   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:42.882976   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887432   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887504   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.893275   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:42.903687   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:42.915232   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919669   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919735   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.925282   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:42.937888   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:42.949994   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954495   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954548   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.960295   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
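	The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names, which is how the system trust store looks up a CA certificate; each hash comes from the certificate itself via the same `openssl x509 -hash` call the log runs. A minimal sketch of how one of the links is derived (illustrative):
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> /etc/ssl/certs/b5213941.0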
	I0816 18:14:42.972006   75402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:42.976450   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:42.982741   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:42.988649   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:42.995021   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:43.000965   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:43.007030   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
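	The six openssl probes above all pass -checkend 86400, i.e. they ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check is what would push minikube toward regenerating that certificate (my reading of the flow, not stated in this log). The same check on a single file looks like:
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	      echo "valid for at least another 24h"
	    else
	      echo "expires within 24h"
	    fi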
	I0816 18:14:43.012891   75402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:43.012983   75402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:43.013071   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.050670   75402 cri.go:89] found id: ""
	I0816 18:14:43.050741   75402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:43.060748   75402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:43.060773   75402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:43.060825   75402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:43.070299   75402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:43.071251   75402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:14:43.071945   75402 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-783465" cluster setting kubeconfig missing "old-k8s-version-783465" context setting]
	I0816 18:14:43.072870   75402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:39.250064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:39.250979   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:39.251028   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:39.250914   76422 retry.go:31] will retry after 1.014851653s: waiting for machine to come up
	I0816 18:14:40.266811   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:40.267287   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:40.267323   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:40.267238   76422 retry.go:31] will retry after 1.333311972s: waiting for machine to come up
	I0816 18:14:41.602031   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:41.602532   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:41.602565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:41.602480   76422 retry.go:31] will retry after 1.525496469s: waiting for machine to come up
	I0816 18:14:43.130136   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:43.130620   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:43.130661   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:43.130563   76422 retry.go:31] will retry after 2.206344656s: waiting for machine to come up
	I0816 18:14:41.206879   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.706278   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:41.806382   75006 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.927145   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.927173   75006 pod_ready.go:82] duration metric: took 4.128607781s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.927182   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932293   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.932314   75006 pod_ready.go:82] duration metric: took 5.122737ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932326   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937128   75006 pod_ready.go:93] pod "kube-proxy-l4lr2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.937146   75006 pod_ready.go:82] duration metric: took 4.812798ms for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937154   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.941992   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.942018   75006 pod_ready.go:82] duration metric: took 4.856588ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.942030   75006 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.141753   75402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:43.154269   75402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0816 18:14:43.154324   75402 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:43.154341   75402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:43.154404   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.192966   75402 cri.go:89] found id: ""
	I0816 18:14:43.193035   75402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:43.213101   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:43.222811   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:43.222826   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:43.222870   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:43.232196   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:43.232261   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:43.241633   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:43.250751   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:43.250800   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:43.260197   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.268943   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:43.269000   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.277887   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:43.286281   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:43.286391   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:43.295899   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:43.306026   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:43.441487   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.213457   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.431649   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.553955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.646817   75402 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:44.646923   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.147202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.648050   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.147958   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.647398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.646992   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.338228   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:45.338729   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:45.338763   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:45.338660   76422 retry.go:31] will retry after 2.526891535s: waiting for machine to come up
	I0816 18:14:47.868326   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:47.868821   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:47.868853   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:47.868774   76422 retry.go:31] will retry after 2.866643935s: waiting for machine to come up
	I0816 18:14:45.706669   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:47.707062   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:45.948791   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.447930   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.147987   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.646974   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.147114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.647020   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.147765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.647135   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.147506   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.647568   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.647865   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.736760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:50.737295   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:50.737331   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:50.737245   76422 retry.go:31] will retry after 3.824271015s: waiting for machine to come up
	I0816 18:14:50.206249   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.206435   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:50.449586   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.948577   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:54.566285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.566784   74510 main.go:141] libmachine: (embed-certs-777541) Found IP for machine: 192.168.61.218
	I0816 18:14:54.566809   74510 main.go:141] libmachine: (embed-certs-777541) Reserving static IP address...
	I0816 18:14:54.566825   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has current primary IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.567171   74510 main.go:141] libmachine: (embed-certs-777541) Reserved static IP address: 192.168.61.218
	I0816 18:14:54.567193   74510 main.go:141] libmachine: (embed-certs-777541) Waiting for SSH to be available...
	I0816 18:14:54.567211   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.567231   74510 main.go:141] libmachine: (embed-certs-777541) DBG | skip adding static IP to network mk-embed-certs-777541 - found existing host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"}
	I0816 18:14:54.567245   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Getting to WaitForSSH function...
	I0816 18:14:54.569546   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.569864   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.569890   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.570019   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH client type: external
	I0816 18:14:54.570046   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa (-rw-------)
	I0816 18:14:54.570073   74510 main.go:141] libmachine: (embed-certs-777541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:54.570082   74510 main.go:141] libmachine: (embed-certs-777541) DBG | About to run SSH command:
	I0816 18:14:54.570109   74510 main.go:141] libmachine: (embed-certs-777541) DBG | exit 0
	I0816 18:14:54.692450   74510 main.go:141] libmachine: (embed-certs-777541) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:54.692828   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetConfigRaw
	I0816 18:14:54.693486   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:54.696565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.696943   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.696987   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.697248   74510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/config.json ...
	I0816 18:14:54.697455   74510 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:54.697475   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:54.697686   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.700172   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700491   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.700520   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700716   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.700906   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701239   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.701440   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.701650   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.701662   74510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:54.800770   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:54.800805   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801047   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:14:54.801079   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801264   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.804313   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804734   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.804761   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804940   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.805132   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805485   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.805711   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.805869   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.805886   74510 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-777541 && echo "embed-certs-777541" | sudo tee /etc/hostname
	I0816 18:14:54.918908   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-777541
	
	I0816 18:14:54.918949   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.921760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922117   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.922146   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922338   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.922511   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922681   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922843   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.923033   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.923243   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.923261   74510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-777541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-777541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-777541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:55.028983   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:55.029016   74510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:55.029040   74510 buildroot.go:174] setting up certificates
	I0816 18:14:55.029051   74510 provision.go:84] configureAuth start
	I0816 18:14:55.029064   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:55.029320   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.032273   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032693   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.032743   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032983   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.035257   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035581   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.035606   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035742   74510 provision.go:143] copyHostCerts
	I0816 18:14:55.035797   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:55.035814   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:55.035899   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:55.035996   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:55.036004   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:55.036024   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:55.036081   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:55.036087   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:55.036106   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:55.036155   74510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.embed-certs-777541 san=[127.0.0.1 192.168.61.218 embed-certs-777541 localhost minikube]
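
The provision.go:117 line above regenerates the machine's server certificate with the listed SANs (loopback, the VM IP, the machine name, localhost, minikube), signed by the shared minikube CA. The sketch below shows the general shape of issuing such a SAN-bearing certificate with Go's crypto/x509; the key sizes, lifetimes, and throwaway in-memory CA are illustrative assumptions rather than minikube's exact implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway in-memory CA; minikube instead loads ca.pem / ca-key.pem from disk.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-777541"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.218")},
		DNSNames:     []string{"embed-certs-777541", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
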
	I0816 18:14:55.182540   74510 provision.go:177] copyRemoteCerts
	I0816 18:14:55.182606   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:55.182633   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.185807   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186179   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.186229   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186429   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.186619   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.186770   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.186884   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.262494   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:14:55.285186   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:55.307082   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:55.328912   74510 provision.go:87] duration metric: took 299.848734ms to configureAuth
	I0816 18:14:55.328934   74510 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:55.329140   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:55.329215   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.331989   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332366   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.332414   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332594   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.332801   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333006   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333158   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.333312   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.333501   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.333522   74510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:55.579734   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:55.579765   74510 machine.go:96] duration metric: took 882.296402ms to provisionDockerMachine
	I0816 18:14:55.579781   74510 start.go:293] postStartSetup for "embed-certs-777541" (driver="kvm2")
	I0816 18:14:55.579793   74510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:55.579814   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.580182   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:55.580216   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.582826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583250   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.583285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583374   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.583574   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.583739   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.583972   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.663379   74510 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:55.667205   74510 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:55.667231   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:55.667321   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:55.667426   74510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:55.667560   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:55.676427   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:55.698188   74510 start.go:296] duration metric: took 118.396211ms for postStartSetup
	I0816 18:14:55.698226   74510 fix.go:56] duration metric: took 20.644852989s for fixHost
	I0816 18:14:55.698245   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.701014   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701359   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.701390   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701587   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.701755   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.701924   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.702070   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.702241   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.702452   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.702464   74510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:55.801397   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832095.756052952
	
	I0816 18:14:55.801431   74510 fix.go:216] guest clock: 1723832095.756052952
	I0816 18:14:55.801443   74510 fix.go:229] Guest: 2024-08-16 18:14:55.756052952 +0000 UTC Remote: 2024-08-16 18:14:55.698231489 +0000 UTC m=+357.018707788 (delta=57.821463ms)
	I0816 18:14:55.801492   74510 fix.go:200] guest clock delta is within tolerance: 57.821463ms
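
The fix.go lines above read the guest's clock with `date +%s.%N` and compare it against a host-side timestamp, accepting the skew when it stays inside a tolerance (57.8ms here). A small sketch of that comparison follows; the parsing helper and the 2-second tolerance are assumptions for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1723832095.756052952",
// always nine fractional digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723832095.756052952")
	if err != nil {
		panic(err)
	}
	// Host-side reference timestamp taken from the log line above.
	remote := time.Date(2024, 8, 16, 18, 14, 55, 698231489, time.UTC)
	delta := guest.Sub(remote)
	ok := delta < 2*time.Second && delta > -2*time.Second
	fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, ok)
}
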
	I0816 18:14:55.801504   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 20.74815396s
	I0816 18:14:55.801528   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.801781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.804216   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804617   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.804659   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804795   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805395   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805622   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805730   74510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:55.805781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.805849   74510 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:55.805877   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.808587   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.808946   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.808978   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809080   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809249   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809415   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.809417   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809442   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809575   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809597   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809720   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809766   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.809857   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809970   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.885026   74510 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:55.927940   74510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:56.072936   74510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:56.080952   74510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:56.081029   74510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:56.100709   74510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:56.100734   74510 start.go:495] detecting cgroup driver to use...
	I0816 18:14:56.100791   74510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:56.115759   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:56.129714   74510 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:56.129774   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:56.142909   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:56.156413   74510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:56.268818   74510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:56.424536   74510 docker.go:233] disabling docker service ...
	I0816 18:14:56.424612   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:56.438033   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:56.450479   74510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:56.560132   74510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:56.683671   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:56.697636   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:56.716486   74510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:56.716560   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.726082   74510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:56.726144   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.735971   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.745410   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.754952   74510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:56.764717   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.774153   74510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.789843   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.799399   74510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:56.807679   74510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:56.807743   74510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:56.819873   74510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:56.829921   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:56.936372   74510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:57.073931   74510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:57.073998   74510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:57.078254   74510 start.go:563] Will wait 60s for crictl version
	I0816 18:14:57.078327   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:14:57.081833   74510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:57.121402   74510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:57.121476   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.149262   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.183015   74510 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:53.146986   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.647279   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.147587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.647911   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.147322   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.647765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.147695   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.647296   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.147031   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.647108   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.184157   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:57.186758   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187177   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:57.187206   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187439   74510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:57.191152   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:57.203073   74510 kubeadm.go:883] updating cluster {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:57.203240   74510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:57.203332   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:57.238289   74510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:57.238348   74510 ssh_runner.go:195] Run: which lz4
	I0816 18:14:57.242251   74510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:57.246081   74510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:57.246124   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:58.459887   74510 crio.go:462] duration metric: took 1.217672418s to copy over tarball
	I0816 18:14:58.459960   74510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:54.707069   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.206750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:55.449391   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.449830   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:59.451338   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:58.147661   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.647270   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.147355   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.647821   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.148023   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.647165   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.147669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.647960   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.147721   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.647932   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.545989   74510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085985152s)
	I0816 18:15:00.546028   74510 crio.go:469] duration metric: took 2.086110527s to extract the tarball
	I0816 18:15:00.546039   74510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:15:00.587096   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:15:00.630366   74510 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:15:00.630394   74510 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:15:00.630405   74510 kubeadm.go:934] updating node { 192.168.61.218 8443 v1.31.0 crio true true} ...
	I0816 18:15:00.630540   74510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-777541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:15:00.630630   74510 ssh_runner.go:195] Run: crio config
	I0816 18:15:00.681196   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:00.681224   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:00.681235   74510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:15:00.681262   74510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-777541 NodeName:embed-certs-777541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:15:00.681439   74510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-777541"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:15:00.681534   74510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:15:00.691239   74510 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:15:00.691294   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:15:00.700059   74510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 18:15:00.717826   74510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:15:00.733475   74510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 18:15:00.750175   74510 ssh_runner.go:195] Run: grep 192.168.61.218	control-plane.minikube.internal$ /etc/hosts
	I0816 18:15:00.753865   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:15:00.765531   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:00.875234   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:00.893095   74510 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541 for IP: 192.168.61.218
	I0816 18:15:00.893115   74510 certs.go:194] generating shared ca certs ...
	I0816 18:15:00.893131   74510 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:00.893274   74510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:15:00.893318   74510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:15:00.893327   74510 certs.go:256] generating profile certs ...
	I0816 18:15:00.893403   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/client.key
	I0816 18:15:00.893459   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key.dd0c1a01
	I0816 18:15:00.893503   74510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key
	I0816 18:15:00.893617   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:15:00.893645   74510 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:15:00.893655   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:15:00.893675   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:15:00.893698   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:15:00.893721   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:15:00.893759   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:15:00.894445   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:15:00.936535   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:15:00.969775   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:15:01.013053   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:15:01.046087   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 18:15:01.073290   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:15:01.097033   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:15:01.119859   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:15:01.141943   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:15:01.168752   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:15:01.191193   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:15:01.213691   74510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:15:01.229374   74510 ssh_runner.go:195] Run: openssl version
	I0816 18:15:01.234563   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:15:01.244301   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248156   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248220   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.253468   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:15:01.262917   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:15:01.272577   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276790   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276841   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.281847   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:15:01.291789   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:15:01.302422   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306320   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306364   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.311335   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
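
The openssl/ln commands above install each CA certificate under /etc/ssl/certs using its OpenSSL subject hash as the link name (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs by scanning that directory. A compact Go sketch of the same pattern follows; the function name and paths are illustrative, not minikube's own helper.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a certificate and links
// it into certsDir as <hash>.0, mirroring the shell commands in the log above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
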
	I0816 18:15:01.320713   74510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:15:01.324442   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:15:01.330137   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:15:01.335693   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:15:01.340987   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:15:01.346071   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:15:01.351280   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:15:01.357275   74510 kubeadm.go:392] StartCluster: {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:15:01.357388   74510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:15:01.357427   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.400422   74510 cri.go:89] found id: ""
	I0816 18:15:01.400497   74510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:15:01.410142   74510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:15:01.410162   74510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:15:01.410211   74510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:15:01.419129   74510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:15:01.420130   74510 kubeconfig.go:125] found "embed-certs-777541" server: "https://192.168.61.218:8443"
	I0816 18:15:01.422036   74510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:15:01.430665   74510 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.218
	I0816 18:15:01.430694   74510 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:15:01.430705   74510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:15:01.430762   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.469108   74510 cri.go:89] found id: ""
	I0816 18:15:01.469182   74510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:15:01.486125   74510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:15:01.495311   74510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:15:01.495335   74510 kubeadm.go:157] found existing configuration files:
	
	I0816 18:15:01.495384   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:15:01.504066   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:15:01.504128   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:15:01.513222   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:15:01.521593   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:15:01.521692   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:15:01.530413   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.539027   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:15:01.539101   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.547802   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:15:01.557143   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:15:01.557203   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:15:01.568616   74510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:15:01.578091   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:01.700661   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.631047   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.833132   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.900476   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.972431   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:15:02.972514   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.473296   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.707731   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:02.206825   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:01.948070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.948398   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.147098   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.646983   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.147320   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.647649   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.647999   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.147901   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.647340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.147339   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.648033   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.973603   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.472779   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.972846   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.473594   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.487878   74510 api_server.go:72] duration metric: took 2.51545841s to wait for apiserver process to appear ...
	I0816 18:15:05.487914   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:15:05.487937   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.450583   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.450618   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.450635   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.495625   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.495656   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.495669   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.516711   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.516744   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:04.836663   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:07.206999   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:06.447839   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.449939   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.988897   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.996347   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.996374   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.488013   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.499514   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:09.499559   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.988080   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.992106   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:15:09.998515   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:15:09.998542   74510 api_server.go:131] duration metric: took 4.510619176s to wait for apiserver health ...
	I0816 18:15:09.998555   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:09.998563   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:10.000470   74510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:15:10.001870   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:15:10.011805   74510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:15:10.032349   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:15:10.046765   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:15:10.046798   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:15:10.046808   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:15:10.046817   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:15:10.046829   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:15:10.046838   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 18:15:10.046847   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:15:10.046855   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:15:10.046867   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 18:15:10.046876   74510 system_pods.go:74] duration metric: took 14.506593ms to wait for pod list to return data ...
	I0816 18:15:10.046889   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:15:10.050663   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:15:10.050686   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:15:10.050699   74510 node_conditions.go:105] duration metric: took 3.805313ms to run NodePressure ...
	I0816 18:15:10.050717   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:10.344177   74510 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348795   74510 kubeadm.go:739] kubelet initialised
	I0816 18:15:10.348820   74510 kubeadm.go:740] duration metric: took 4.612695ms waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348830   74510 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:10.355270   74510 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.361564   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361584   74510 pod_ready.go:82] duration metric: took 6.2936ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.361592   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361598   74510 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.367126   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367149   74510 pod_ready.go:82] duration metric: took 5.542782ms for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.367159   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367166   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.372241   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372262   74510 pod_ready.go:82] duration metric: took 5.086551ms for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.372273   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372301   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.436397   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436423   74510 pod_ready.go:82] duration metric: took 64.108858ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.436432   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436443   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.836116   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836146   74510 pod_ready.go:82] duration metric: took 399.693364ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.836158   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836165   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.235403   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235426   74510 pod_ready.go:82] duration metric: took 399.255693ms for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.235439   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235445   74510 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.635717   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635746   74510 pod_ready.go:82] duration metric: took 400.29283ms for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.635756   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635762   74510 pod_ready.go:39] duration metric: took 1.286923943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:11.635784   74510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:15:11.646221   74510 ops.go:34] apiserver oom_adj: -16
	I0816 18:15:11.646248   74510 kubeadm.go:597] duration metric: took 10.23607804s to restartPrimaryControlPlane
	I0816 18:15:11.646269   74510 kubeadm.go:394] duration metric: took 10.288999278s to StartCluster
	I0816 18:15:11.646322   74510 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.646405   74510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:15:11.648652   74510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.648939   74510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:15:11.649056   74510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:15:11.649124   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:15:11.649155   74510 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-777541"
	I0816 18:15:11.649165   74510 addons.go:69] Setting metrics-server=true in profile "embed-certs-777541"
	I0816 18:15:11.649192   74510 addons.go:234] Setting addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:11.649201   74510 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-777541"
	W0816 18:15:11.649205   74510 addons.go:243] addon metrics-server should already be in state true
	I0816 18:15:11.649193   74510 addons.go:69] Setting default-storageclass=true in profile "embed-certs-777541"
	I0816 18:15:11.649252   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649254   74510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-777541"
	W0816 18:15:11.649209   74510 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:15:11.649332   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649702   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649706   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649742   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649772   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649877   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649930   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.651580   74510 out.go:177] * Verifying Kubernetes components...
	I0816 18:15:11.652903   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:11.665975   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0816 18:15:11.666041   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0816 18:15:11.666404   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666439   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666986   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667005   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667051   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667085   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667312   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667517   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667846   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.667899   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.668039   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.668077   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.669328   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0816 18:15:11.669765   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.670270   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.670301   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.670658   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.670896   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.674148   74510 addons.go:234] Setting addon default-storageclass=true in "embed-certs-777541"
	W0816 18:15:11.674165   74510 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:15:11.674184   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.674448   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.674482   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.683629   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0816 18:15:11.683637   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0816 18:15:11.684040   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684048   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684499   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684516   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684653   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684670   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684968   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685114   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685136   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.685329   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.687030   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.687130   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.688852   74510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:15:11.688855   74510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:15:08.147308   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.647669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.147149   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.647072   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.147381   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.647567   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.147101   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.647587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.146972   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.647842   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.689590   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0816 18:15:11.690041   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.690152   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:15:11.690170   74510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:15:11.690186   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690223   74510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:11.690238   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:15:11.690253   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690606   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.690627   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.691006   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.691543   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.691575   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.693646   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693669   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693988   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694007   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694051   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694275   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694436   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694468   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694545   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694602   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694677   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.694885   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.709409   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0816 18:15:11.709800   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.710343   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.710363   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.710700   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.710874   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.712484   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.712691   74510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:11.712706   74510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:15:11.712723   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.715590   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716017   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.716050   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716167   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.716379   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.716572   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.716737   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.864710   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:11.885871   74510 node_ready.go:35] waiting up to 6m0s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:11.985725   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:12.007635   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:15:12.007669   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:15:12.040044   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:12.059661   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:15:12.059687   74510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:15:12.123787   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.123812   74510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:15:12.167249   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.457960   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.457985   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458264   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:12.458315   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458334   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.458348   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.458360   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458577   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458590   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468651   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.468675   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.468921   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.468940   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468963   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.203995   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163904081s)
	I0816 18:15:13.204048   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204060   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204309   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.204350   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204359   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.204368   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204376   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204562   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204589   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213068   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.045790147s)
	I0816 18:15:13.213101   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213115   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213533   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213551   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213555   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.213560   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213595   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213869   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213887   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213897   74510 addons.go:475] Verifying addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:13.213901   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.215724   74510 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 18:15:13.217031   74510 addons.go:510] duration metric: took 1.567977779s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 18:15:09.706813   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:11.708577   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:10.947986   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:12.949227   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:13.147558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.647755   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.147408   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.647810   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.147888   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.647476   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.647785   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.647852   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.889379   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:15.889764   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:18.390031   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:14.207743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:16.705831   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:15.448826   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:17.950756   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:18.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.647013   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.147027   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.647100   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.147070   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.147251   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.647856   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.147427   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.647231   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.890110   74510 node_ready.go:49] node "embed-certs-777541" has status "Ready":"True"
	I0816 18:15:18.890138   74510 node_ready.go:38] duration metric: took 7.004237799s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:18.890156   74510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:18.897124   74510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902860   74510 pod_ready.go:93] pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:18.902878   74510 pod_ready.go:82] duration metric: took 5.73242ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902886   74510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:20.909185   74510 pod_ready.go:103] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.909629   74510 pod_ready.go:93] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:21.909660   74510 pod_ready.go:82] duration metric: took 3.006768325s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:21.909670   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916066   74510 pod_ready.go:93] pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.916090   74510 pod_ready.go:82] duration metric: took 1.006414177s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916099   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920882   74510 pod_ready.go:93] pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.920908   74510 pod_ready.go:82] duration metric: took 4.802561ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920918   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926952   74510 pod_ready.go:93] pod "kube-proxy-j5rl7" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.926975   74510 pod_ready.go:82] duration metric: took 6.0498ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926984   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:19.206127   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.206280   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.705588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:20.448793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:22.948798   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.647030   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.147677   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.647324   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.147973   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.147160   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.646963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.147620   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.647918   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.933953   74510 pod_ready.go:103] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.433826   74510 pod_ready.go:93] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:25.433846   74510 pod_ready.go:82] duration metric: took 2.506855714s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:25.433855   74510 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:27.440119   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.707915   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.206580   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.447687   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:27.948700   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.146994   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.647364   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.147332   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.647773   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.147276   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.647794   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.147398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.647565   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.147139   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.647961   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.440564   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:31.940747   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:30.706544   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.706852   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:29.948982   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.447920   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:34.448186   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:33.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.647087   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.147881   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.646988   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.147118   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.647978   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.147541   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.647423   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.147051   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.647726   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.439692   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.439956   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.440315   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:35.206291   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:37.206902   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.948416   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.447952   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.147192   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.647318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.147186   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.647662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.147044   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.647787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.147638   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.647490   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.147787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.647959   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.440405   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:42.440727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.207086   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.706048   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.706585   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.450069   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.948101   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.147938   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.647855   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.147781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.647710   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:44.647796   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:44.682176   75402 cri.go:89] found id: ""
	I0816 18:15:44.682207   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.682218   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:44.682226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:44.682285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:44.717500   75402 cri.go:89] found id: ""
	I0816 18:15:44.717530   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.717540   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:44.717552   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:44.717620   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:44.751816   75402 cri.go:89] found id: ""
	I0816 18:15:44.751847   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.751858   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:44.751865   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:44.751942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:44.783236   75402 cri.go:89] found id: ""
	I0816 18:15:44.783260   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.783267   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:44.783272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:44.783337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:44.813087   75402 cri.go:89] found id: ""
	I0816 18:15:44.813110   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.813116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:44.813122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:44.813166   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:44.843568   75402 cri.go:89] found id: ""
	I0816 18:15:44.843599   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.843609   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:44.843616   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:44.843679   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:44.873694   75402 cri.go:89] found id: ""
	I0816 18:15:44.873723   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.873734   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:44.873741   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:44.873808   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:44.906183   75402 cri.go:89] found id: ""
	I0816 18:15:44.906212   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.906222   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:44.906231   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:44.906241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:44.958963   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:44.958993   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:44.972390   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:44.972415   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:45.091624   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:45.091645   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:45.091661   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:45.159927   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:45.159963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:47.698398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:47.711848   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:47.711917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:47.744247   75402 cri.go:89] found id: ""
	I0816 18:15:47.744278   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.744288   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:47.744295   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:47.744374   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:47.783188   75402 cri.go:89] found id: ""
	I0816 18:15:47.783211   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.783219   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:47.783224   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:47.783270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:47.829284   75402 cri.go:89] found id: ""
	I0816 18:15:47.829320   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.829333   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:47.829341   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:47.829413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:47.879482   75402 cri.go:89] found id: ""
	I0816 18:15:47.879514   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.879525   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:47.879532   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:47.879606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:47.913766   75402 cri.go:89] found id: ""
	I0816 18:15:47.913797   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.913808   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:47.913815   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:47.913880   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:47.947262   75402 cri.go:89] found id: ""
	I0816 18:15:47.947340   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.947353   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:47.947362   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:47.947427   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:47.979638   75402 cri.go:89] found id: ""
	I0816 18:15:47.979667   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.979678   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:47.979685   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:47.979741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:48.010246   75402 cri.go:89] found id: ""
	I0816 18:15:48.010277   75402 logs.go:276] 0 containers: []
	W0816 18:15:48.010288   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:48.010296   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:48.010310   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:48.083916   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:48.083953   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:44.940775   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.440356   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:46.207236   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.705791   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:45.948300   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.948501   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.120254   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:48.120285   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:48.169590   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:48.169628   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:48.182821   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:48.182850   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:48.254088   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:50.755114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:50.768167   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:50.768250   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:50.800881   75402 cri.go:89] found id: ""
	I0816 18:15:50.800906   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.800913   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:50.800918   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:50.800969   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:50.833538   75402 cri.go:89] found id: ""
	I0816 18:15:50.833567   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.833578   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:50.833586   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:50.833649   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:50.867306   75402 cri.go:89] found id: ""
	I0816 18:15:50.867336   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.867347   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:50.867353   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:50.867400   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:50.900029   75402 cri.go:89] found id: ""
	I0816 18:15:50.900055   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.900064   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:50.900072   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:50.900135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:50.933604   75402 cri.go:89] found id: ""
	I0816 18:15:50.933630   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.933638   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:50.933643   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:50.933707   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:50.966102   75402 cri.go:89] found id: ""
	I0816 18:15:50.966131   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.966141   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:50.966149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:50.966210   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:50.998007   75402 cri.go:89] found id: ""
	I0816 18:15:50.998036   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.998047   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:50.998054   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:50.998115   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:51.032306   75402 cri.go:89] found id: ""
	I0816 18:15:51.032342   75402 logs.go:276] 0 containers: []
	W0816 18:15:51.032349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:51.032357   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:51.032369   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:51.083186   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:51.083222   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:51.096072   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:51.096153   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:51.162667   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:51.162693   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:51.162709   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:51.241913   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:51.241954   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:49.440546   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:51.940026   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.706662   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.206075   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.447947   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:52.448340   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:54.448431   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.779323   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:53.793358   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:53.793433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:53.827380   75402 cri.go:89] found id: ""
	I0816 18:15:53.827414   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.827424   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:53.827430   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:53.827489   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:53.867331   75402 cri.go:89] found id: ""
	I0816 18:15:53.867370   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.867380   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:53.867386   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:53.867438   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:53.899445   75402 cri.go:89] found id: ""
	I0816 18:15:53.899477   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.899489   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:53.899498   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:53.899588   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:53.936527   75402 cri.go:89] found id: ""
	I0816 18:15:53.936556   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.936568   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:53.936576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:53.936653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:53.970739   75402 cri.go:89] found id: ""
	I0816 18:15:53.970765   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.970773   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:53.970780   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:53.970842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:54.004119   75402 cri.go:89] found id: ""
	I0816 18:15:54.004150   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.004159   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:54.004164   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:54.004217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:54.038370   75402 cri.go:89] found id: ""
	I0816 18:15:54.038400   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.038411   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:54.038416   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:54.038472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:54.079346   75402 cri.go:89] found id: ""
	I0816 18:15:54.079375   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.079383   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:54.079392   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:54.079403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:54.116551   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:54.116586   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:54.169930   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:54.169970   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:54.182416   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:54.182448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:54.253516   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:54.253539   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:54.253559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:56.833124   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:56.846139   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:56.846211   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:56.880899   75402 cri.go:89] found id: ""
	I0816 18:15:56.880928   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.880939   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:56.880945   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:56.880994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:56.913362   75402 cri.go:89] found id: ""
	I0816 18:15:56.913393   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.913406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:56.913415   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:56.913507   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:56.951876   75402 cri.go:89] found id: ""
	I0816 18:15:56.951904   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.951914   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:56.951919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:56.951988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:56.986335   75402 cri.go:89] found id: ""
	I0816 18:15:56.986358   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.986366   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:56.986372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:56.986423   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:57.022485   75402 cri.go:89] found id: ""
	I0816 18:15:57.022511   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.022522   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:57.022529   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:57.022641   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:57.055436   75402 cri.go:89] found id: ""
	I0816 18:15:57.055463   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.055470   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:57.055476   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:57.055536   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:57.085930   75402 cri.go:89] found id: ""
	I0816 18:15:57.085965   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.085975   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:57.085981   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:57.086032   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:57.120436   75402 cri.go:89] found id: ""
	I0816 18:15:57.120466   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.120477   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:57.120488   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:57.120501   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:57.202161   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:57.202218   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:57.243766   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:57.243805   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:57.295552   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:57.295585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:57.307769   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:57.307802   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:57.390480   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:53.941399   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.439763   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:58.440357   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:55.206970   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:57.207312   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.948085   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.448174   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.891480   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:59.904766   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:59.904836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:59.939209   75402 cri.go:89] found id: ""
	I0816 18:15:59.939241   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.939252   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:59.939260   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:59.939324   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:59.971782   75402 cri.go:89] found id: ""
	I0816 18:15:59.971812   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.971822   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:59.971832   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:59.971894   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:00.018585   75402 cri.go:89] found id: ""
	I0816 18:16:00.018630   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.018643   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:00.018654   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:00.018722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.050484   75402 cri.go:89] found id: ""
	I0816 18:16:00.050520   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.050532   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:00.050540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:00.050603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:00.082900   75402 cri.go:89] found id: ""
	I0816 18:16:00.082930   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.082942   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:00.082951   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:00.083025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:00.115330   75402 cri.go:89] found id: ""
	I0816 18:16:00.115363   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.115372   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:00.115378   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:00.115442   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:00.150898   75402 cri.go:89] found id: ""
	I0816 18:16:00.150935   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.150952   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:00.150960   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:00.151033   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:00.193304   75402 cri.go:89] found id: ""
	I0816 18:16:00.193338   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.193349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:00.193359   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:00.193370   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:00.247340   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:00.247376   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:00.260470   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:00.260500   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:00.336483   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:00.336506   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:00.336521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:00.421251   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:00.421289   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:02.964042   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:02.977284   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:02.977381   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:03.009533   75402 cri.go:89] found id: ""
	I0816 18:16:03.009574   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.009586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:03.009594   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:03.009673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:03.043756   75402 cri.go:89] found id: ""
	I0816 18:16:03.043784   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.043794   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:03.043802   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:03.043867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:03.078817   75402 cri.go:89] found id: ""
	I0816 18:16:03.078840   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.078848   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:03.078853   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:03.078906   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.440728   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:02.440788   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.706129   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.707967   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.948193   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.448504   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:03.112874   75402 cri.go:89] found id: ""
	I0816 18:16:03.112903   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.112912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:03.112918   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:03.112985   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:03.152008   75402 cri.go:89] found id: ""
	I0816 18:16:03.152040   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.152052   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:03.152059   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:03.152125   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:03.187353   75402 cri.go:89] found id: ""
	I0816 18:16:03.187386   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.187396   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:03.187404   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:03.187467   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:03.220860   75402 cri.go:89] found id: ""
	I0816 18:16:03.220895   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.220903   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:03.220909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:03.220958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:03.252202   75402 cri.go:89] found id: ""
	I0816 18:16:03.252240   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.252247   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:03.252256   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:03.252268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:03.286907   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:03.286934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:03.338212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:03.338249   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:03.352548   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:03.352585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:03.427580   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:03.427610   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:03.427626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.011792   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:06.024201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:06.024277   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:06.058328   75402 cri.go:89] found id: ""
	I0816 18:16:06.058356   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.058367   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:06.058373   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:06.058433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:06.091262   75402 cri.go:89] found id: ""
	I0816 18:16:06.091298   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.091311   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:06.091318   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:06.091382   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:06.124114   75402 cri.go:89] found id: ""
	I0816 18:16:06.124146   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.124154   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:06.124159   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:06.124220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:06.155379   75402 cri.go:89] found id: ""
	I0816 18:16:06.155406   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.155416   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:06.155422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:06.155471   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:06.189442   75402 cri.go:89] found id: ""
	I0816 18:16:06.189472   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.189480   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:06.189485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:06.189538   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:06.228881   75402 cri.go:89] found id: ""
	I0816 18:16:06.228910   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.228921   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:06.228929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:06.229003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:06.262272   75402 cri.go:89] found id: ""
	I0816 18:16:06.262299   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.262310   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:06.262317   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:06.262386   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:06.295427   75402 cri.go:89] found id: ""
	I0816 18:16:06.295456   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.295468   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:06.295478   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:06.295492   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:06.347569   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:06.347608   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:06.362786   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:06.362825   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:06.432020   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:06.432044   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:06.432059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.512085   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:06.512120   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:04.940128   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:07.439708   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.206477   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.208125   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.706765   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.947599   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.948183   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:09.051957   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:09.066630   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:09.066690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:09.101484   75402 cri.go:89] found id: ""
	I0816 18:16:09.101515   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.101526   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:09.101536   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:09.101614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:09.140645   75402 cri.go:89] found id: ""
	I0816 18:16:09.140677   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.140689   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:09.140696   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:09.140758   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:09.174666   75402 cri.go:89] found id: ""
	I0816 18:16:09.174698   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.174708   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:09.174717   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:09.174780   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:09.209715   75402 cri.go:89] found id: ""
	I0816 18:16:09.209748   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.209758   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:09.209767   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:09.209845   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:09.243681   75402 cri.go:89] found id: ""
	I0816 18:16:09.243712   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.243720   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:09.243726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:09.243781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:09.278058   75402 cri.go:89] found id: ""
	I0816 18:16:09.278090   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.278102   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:09.278111   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:09.278178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:09.313092   75402 cri.go:89] found id: ""
	I0816 18:16:09.313122   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.313132   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:09.313137   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:09.313201   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:09.345203   75402 cri.go:89] found id: ""
	I0816 18:16:09.345229   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.345236   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:09.345245   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:09.345259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:09.358198   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:09.358225   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:09.422024   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:09.422047   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:09.422059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:09.498684   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:09.498717   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.535349   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:09.535382   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.087472   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:12.100412   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:12.100477   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:12.133982   75402 cri.go:89] found id: ""
	I0816 18:16:12.134018   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.134030   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:12.134038   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:12.134100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:12.166466   75402 cri.go:89] found id: ""
	I0816 18:16:12.166497   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.166507   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:12.166514   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:12.166589   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:12.197752   75402 cri.go:89] found id: ""
	I0816 18:16:12.197779   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.197790   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:12.197797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:12.197856   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:12.239759   75402 cri.go:89] found id: ""
	I0816 18:16:12.239789   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.239801   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:12.239810   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:12.239871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:12.273263   75402 cri.go:89] found id: ""
	I0816 18:16:12.273292   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.273302   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:12.273310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:12.273370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:12.308788   75402 cri.go:89] found id: ""
	I0816 18:16:12.308820   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.308831   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:12.308839   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:12.308897   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:12.345243   75402 cri.go:89] found id: ""
	I0816 18:16:12.345274   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.345281   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:12.345288   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:12.345341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:12.379939   75402 cri.go:89] found id: ""
	I0816 18:16:12.379968   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.379978   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:12.379989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:12.380004   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.436097   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:12.436130   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:12.449328   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:12.449357   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:12.518723   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:12.518749   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:12.518764   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:12.600228   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:12.600268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.441051   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.441097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.206853   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.705328   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.449793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.948517   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.137940   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:15.150617   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:15.150694   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:15.186029   75402 cri.go:89] found id: ""
	I0816 18:16:15.186057   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.186067   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:15.186074   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:15.186134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:15.219812   75402 cri.go:89] found id: ""
	I0816 18:16:15.219840   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.219851   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:15.219864   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:15.219927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:15.253434   75402 cri.go:89] found id: ""
	I0816 18:16:15.253462   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.253472   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:15.253479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:15.253542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:15.286697   75402 cri.go:89] found id: ""
	I0816 18:16:15.286729   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.286745   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:15.286751   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:15.286810   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:15.319363   75402 cri.go:89] found id: ""
	I0816 18:16:15.319405   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.319415   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:15.319422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:15.319506   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:15.353900   75402 cri.go:89] found id: ""
	I0816 18:16:15.353924   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.353931   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:15.353937   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:15.353991   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:15.389086   75402 cri.go:89] found id: ""
	I0816 18:16:15.389114   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.389122   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:15.389127   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:15.389184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:15.424069   75402 cri.go:89] found id: ""
	I0816 18:16:15.424099   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.424110   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:15.424121   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:15.424136   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:15.482703   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:15.482738   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:15.496859   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:15.496886   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:15.562178   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:15.562196   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:15.562212   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:15.643484   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:15.643521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:13.944174   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.439987   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.442569   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.706743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.206088   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.448775   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.948447   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.180963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:18.194705   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:18.194783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:18.231302   75402 cri.go:89] found id: ""
	I0816 18:16:18.231337   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.231348   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:18.231355   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:18.231413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:18.264098   75402 cri.go:89] found id: ""
	I0816 18:16:18.264124   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.264135   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:18.264155   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:18.264228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:18.298133   75402 cri.go:89] found id: ""
	I0816 18:16:18.298165   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.298178   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:18.298186   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:18.298252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:18.331323   75402 cri.go:89] found id: ""
	I0816 18:16:18.331354   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.331362   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:18.331367   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:18.331416   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:18.365677   75402 cri.go:89] found id: ""
	I0816 18:16:18.365709   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.365718   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:18.365724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:18.365774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:18.399801   75402 cri.go:89] found id: ""
	I0816 18:16:18.399835   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.399844   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:18.399850   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:18.399908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:18.438148   75402 cri.go:89] found id: ""
	I0816 18:16:18.438179   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.438189   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:18.438197   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:18.438257   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:18.472185   75402 cri.go:89] found id: ""
	I0816 18:16:18.472215   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.472223   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:18.472232   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:18.472243   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:18.523369   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:18.523400   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:18.536152   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:18.536179   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:18.611539   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:18.611560   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:18.611571   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:18.688043   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:18.688079   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:21.229163   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:21.242641   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:21.242717   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:21.275188   75402 cri.go:89] found id: ""
	I0816 18:16:21.275213   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.275220   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:21.275226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:21.275275   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:21.308377   75402 cri.go:89] found id: ""
	I0816 18:16:21.308406   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.308417   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:21.308424   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:21.308475   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:21.341067   75402 cri.go:89] found id: ""
	I0816 18:16:21.341098   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.341106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:21.341112   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:21.341170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:21.372707   75402 cri.go:89] found id: ""
	I0816 18:16:21.372743   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.372756   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:21.372763   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:21.372847   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:21.410210   75402 cri.go:89] found id: ""
	I0816 18:16:21.410241   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.410252   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:21.410259   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:21.410323   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:21.444840   75402 cri.go:89] found id: ""
	I0816 18:16:21.444863   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.444872   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:21.444879   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:21.444942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:21.478278   75402 cri.go:89] found id: ""
	I0816 18:16:21.478319   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.478327   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:21.478333   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:21.478395   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:21.512026   75402 cri.go:89] found id: ""
	I0816 18:16:21.512063   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.512073   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:21.512090   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:21.512111   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:21.564800   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:21.564834   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:21.577343   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:21.577368   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:21.663216   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:21.663238   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:21.663251   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:21.741960   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:21.741994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:20.939740   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.942844   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:20.706032   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.707112   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:21.449404   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:23.454804   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:24.282136   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:24.296452   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:24.296513   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:24.337173   75402 cri.go:89] found id: ""
	I0816 18:16:24.337200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.337210   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:24.337218   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:24.337282   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:24.374163   75402 cri.go:89] found id: ""
	I0816 18:16:24.374200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.374213   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:24.374222   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:24.374287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:24.407823   75402 cri.go:89] found id: ""
	I0816 18:16:24.407854   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.407866   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:24.407881   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:24.407953   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:24.444006   75402 cri.go:89] found id: ""
	I0816 18:16:24.444032   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.444042   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:24.444049   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:24.444113   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:24.479082   75402 cri.go:89] found id: ""
	I0816 18:16:24.479110   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.479119   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:24.479125   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:24.479174   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:24.524738   75402 cri.go:89] found id: ""
	I0816 18:16:24.524764   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.524775   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:24.524782   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:24.524842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:24.560298   75402 cri.go:89] found id: ""
	I0816 18:16:24.560326   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.560335   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:24.560343   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:24.560406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:24.597182   75402 cri.go:89] found id: ""
	I0816 18:16:24.597214   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.597227   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:24.597239   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:24.597254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:24.653063   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:24.653106   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:24.665940   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:24.665972   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:24.736599   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:24.736639   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:24.736657   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:24.821883   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:24.821939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.359558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:27.382980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:27.383053   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:27.416766   75402 cri.go:89] found id: ""
	I0816 18:16:27.416793   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.416802   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:27.416811   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:27.416873   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:27.452966   75402 cri.go:89] found id: ""
	I0816 18:16:27.452988   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.452995   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:27.453001   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:27.453050   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:27.485850   75402 cri.go:89] found id: ""
	I0816 18:16:27.485885   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.485896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:27.485903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:27.485960   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:27.517667   75402 cri.go:89] found id: ""
	I0816 18:16:27.517694   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.517704   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:27.517711   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:27.517774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:27.553547   75402 cri.go:89] found id: ""
	I0816 18:16:27.553574   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.553582   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:27.553593   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:27.553653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:27.586857   75402 cri.go:89] found id: ""
	I0816 18:16:27.586884   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.586893   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:27.586898   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:27.586957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:27.621739   75402 cri.go:89] found id: ""
	I0816 18:16:27.621766   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.621776   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:27.621784   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:27.621844   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:27.657772   75402 cri.go:89] found id: ""
	I0816 18:16:27.657797   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.657805   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:27.657819   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:27.657831   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:27.729769   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:27.729796   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:27.729810   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:27.813351   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:27.813403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.852985   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:27.853010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:27.908434   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:27.908476   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:25.439828   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.440749   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.207590   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.706496   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.948579   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:28.448590   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.422781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:30.435987   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:30.436070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:30.470878   75402 cri.go:89] found id: ""
	I0816 18:16:30.470907   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.470918   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:30.470926   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:30.470983   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:30.504940   75402 cri.go:89] found id: ""
	I0816 18:16:30.504969   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.504980   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:30.504988   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:30.505058   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:30.538680   75402 cri.go:89] found id: ""
	I0816 18:16:30.538708   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.538716   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:30.538722   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:30.538788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:30.574757   75402 cri.go:89] found id: ""
	I0816 18:16:30.574782   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.574791   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:30.574797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:30.574853   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:30.612500   75402 cri.go:89] found id: ""
	I0816 18:16:30.612529   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.612539   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:30.612547   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:30.612613   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:30.644572   75402 cri.go:89] found id: ""
	I0816 18:16:30.644595   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.644603   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:30.644609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:30.644678   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:30.678199   75402 cri.go:89] found id: ""
	I0816 18:16:30.678232   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.678243   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:30.678252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:30.678331   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:30.709435   75402 cri.go:89] found id: ""
	I0816 18:16:30.709470   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.709482   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:30.709494   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:30.709511   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.723430   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:30.723464   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:30.800340   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:30.800374   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:30.800390   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:30.883945   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:30.883986   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:30.922107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:30.922139   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:29.940430   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.440198   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:29.706649   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.205271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.949515   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.448456   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.480016   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:33.494178   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:33.494241   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:33.529497   75402 cri.go:89] found id: ""
	I0816 18:16:33.529527   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.529546   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:33.529554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:33.529614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:33.566670   75402 cri.go:89] found id: ""
	I0816 18:16:33.566700   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.566711   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:33.566718   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:33.566781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:33.603898   75402 cri.go:89] found id: ""
	I0816 18:16:33.603926   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.603937   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:33.603944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:33.604003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:33.636077   75402 cri.go:89] found id: ""
	I0816 18:16:33.636111   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.636125   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:33.636134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:33.636200   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:33.668974   75402 cri.go:89] found id: ""
	I0816 18:16:33.669002   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.669011   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:33.669017   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:33.669070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:33.700981   75402 cri.go:89] found id: ""
	I0816 18:16:33.701010   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.701019   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:33.701026   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:33.701088   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:33.735430   75402 cri.go:89] found id: ""
	I0816 18:16:33.735463   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.735474   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:33.735481   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:33.735539   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:33.779797   75402 cri.go:89] found id: ""
	I0816 18:16:33.779829   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.779840   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:33.779851   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:33.779865   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:33.824873   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:33.824908   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.874177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:33.874217   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:33.888535   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:33.888561   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:33.957590   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:33.957608   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:33.957627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:36.533660   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:36.546542   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:36.546606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:36.584056   75402 cri.go:89] found id: ""
	I0816 18:16:36.584085   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.584094   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:36.584099   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:36.584149   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:36.622143   75402 cri.go:89] found id: ""
	I0816 18:16:36.622172   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.622184   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:36.622193   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:36.622262   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:36.655479   75402 cri.go:89] found id: ""
	I0816 18:16:36.655509   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.655520   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:36.655528   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:36.655603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:36.688044   75402 cri.go:89] found id: ""
	I0816 18:16:36.688076   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.688088   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:36.688096   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:36.688161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:36.725831   75402 cri.go:89] found id: ""
	I0816 18:16:36.725861   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.725868   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:36.725874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:36.725925   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:36.758398   75402 cri.go:89] found id: ""
	I0816 18:16:36.758433   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.758444   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:36.758453   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:36.758517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:36.791097   75402 cri.go:89] found id: ""
	I0816 18:16:36.791126   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.791136   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:36.791144   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:36.791207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:36.829337   75402 cri.go:89] found id: ""
	I0816 18:16:36.829369   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.829380   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:36.829391   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:36.829405   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:36.881898   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:36.881932   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:36.895584   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:36.895618   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:36.967175   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:36.967197   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:36.967213   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:37.046993   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:37.047025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
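The cycle above repeats the same container probe for every control-plane component: run `crictl ps -a --quiet --name=<component>` over SSH and treat empty output as "No container was found". Below is a minimal, self-contained Go sketch of that probe, run locally rather than through minikube's ssh_runner and not the actual cri.go code; it assumes sudo and crictl are available on the node, and the component list is taken from the log lines above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probe runs `crictl ps -a --quiet --name=<name>` and reports whether any
	// container IDs came back. Empty output corresponds to the
	// `found id: ""` / `0 containers` lines in the log above.
	func probe(name string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return false, err
		}
		return len(strings.Fields(string(out))) > 0, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ok, err := probe(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			if !ok {
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
	}

When every probe comes back empty, as it does throughout this section, the harness falls back to gathering kubelet, dmesg, CRI-O and container-status logs, which is the sequence repeated below.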
	I0816 18:16:34.440475   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.946369   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:34.206677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.207893   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:38.706193   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:35.449611   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:37.947527   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.588683   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:39.607205   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:39.607287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:39.640517   75402 cri.go:89] found id: ""
	I0816 18:16:39.640541   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.640549   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:39.640554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:39.640604   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:39.673777   75402 cri.go:89] found id: ""
	I0816 18:16:39.673805   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.673813   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:39.673818   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:39.673899   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:39.709574   75402 cri.go:89] found id: ""
	I0816 18:16:39.709598   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.709606   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:39.709611   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:39.709666   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:39.743946   75402 cri.go:89] found id: ""
	I0816 18:16:39.743971   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.743979   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:39.743985   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:39.744041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:39.776140   75402 cri.go:89] found id: ""
	I0816 18:16:39.776171   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.776181   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:39.776187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:39.776254   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:39.808697   75402 cri.go:89] found id: ""
	I0816 18:16:39.808719   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.808728   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:39.808734   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:39.808793   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:39.840163   75402 cri.go:89] found id: ""
	I0816 18:16:39.840190   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.840200   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:39.840206   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:39.840270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:39.874396   75402 cri.go:89] found id: ""
	I0816 18:16:39.874419   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.874426   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:39.874434   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:39.874448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:39.927922   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:39.927963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:39.942048   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:39.942076   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:40.012143   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:40.012166   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:40.012181   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:40.088798   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:40.088844   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:42.625875   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:42.640386   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:42.640448   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:42.675201   75402 cri.go:89] found id: ""
	I0816 18:16:42.675224   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.675231   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:42.675236   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:42.675293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:42.705156   75402 cri.go:89] found id: ""
	I0816 18:16:42.705182   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.705192   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:42.705199   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:42.705258   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:42.738921   75402 cri.go:89] found id: ""
	I0816 18:16:42.738948   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.738956   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:42.738962   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:42.739013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:42.771130   75402 cri.go:89] found id: ""
	I0816 18:16:42.771160   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.771168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:42.771175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:42.771231   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:42.805774   75402 cri.go:89] found id: ""
	I0816 18:16:42.805803   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.805811   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:42.805817   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:42.805879   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:42.840248   75402 cri.go:89] found id: ""
	I0816 18:16:42.840277   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.840293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:42.840302   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:42.840360   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:42.873260   75402 cri.go:89] found id: ""
	I0816 18:16:42.873287   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.873297   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:42.873322   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:42.873383   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:42.906205   75402 cri.go:89] found id: ""
	I0816 18:16:42.906230   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.906238   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:42.906247   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:42.906257   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:42.959235   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:42.959272   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:42.972063   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:42.972090   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:43.039530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:43.039558   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:43.039569   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:39.440219   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:41.441052   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:40.707059   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.210643   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:42.448534   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.115486   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:43.115519   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:45.651040   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:45.663718   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:45.663812   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:45.696548   75402 cri.go:89] found id: ""
	I0816 18:16:45.696578   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.696586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:45.696591   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:45.696663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:45.731032   75402 cri.go:89] found id: ""
	I0816 18:16:45.731059   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.731068   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:45.731073   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:45.731126   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:45.764801   75402 cri.go:89] found id: ""
	I0816 18:16:45.764829   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.764840   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:45.764846   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:45.764908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:45.800768   75402 cri.go:89] found id: ""
	I0816 18:16:45.800795   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.800803   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:45.800809   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:45.800858   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:45.841460   75402 cri.go:89] found id: ""
	I0816 18:16:45.841486   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.841493   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:45.841505   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:45.841566   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:45.875230   75402 cri.go:89] found id: ""
	I0816 18:16:45.875254   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.875261   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:45.875266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:45.875319   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:45.907711   75402 cri.go:89] found id: ""
	I0816 18:16:45.907739   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.907747   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:45.907753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:45.907804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:45.943147   75402 cri.go:89] found id: ""
	I0816 18:16:45.943171   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.943182   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:45.943192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:45.943206   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:45.998459   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:45.998491   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:46.013237   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:46.013267   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:46.079248   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:46.079273   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:46.079288   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:46.158842   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:46.158874   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:43.939212   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.939893   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:47.940331   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.706588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.206342   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:44.948046   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:46.948752   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:49.448263   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
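Interleaved with the 75402 log-gathering loop, three other runs (74510, 74828, 75006) are polling a metrics-server pod and reporting Ready:"False" every few seconds. The following is a rough illustration of that kind of readiness poll; it shells out to kubectl instead of using the Kubernetes Go client that pod_ready.go actually uses, the pod name and namespace are copied from the log, and the two-second interval and five-minute deadline are assumptions for the sketch only.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady asks kubectl for the pod's Ready condition and returns true only
	// when it reports "True".
	func podReady(ns, name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", "-n", ns, name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // assumed deadline
		for time.Now().Before(deadline) {
			ready, err := podReady("kube-system", "metrics-server-6867b74b74-fc4h4")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}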
	I0816 18:16:48.696728   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:48.710946   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:48.711041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:48.746696   75402 cri.go:89] found id: ""
	I0816 18:16:48.746727   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.746735   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:48.746741   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:48.746803   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:48.781496   75402 cri.go:89] found id: ""
	I0816 18:16:48.781522   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.781532   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:48.781539   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:48.781603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:48.815628   75402 cri.go:89] found id: ""
	I0816 18:16:48.815654   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.815665   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:48.815673   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:48.815736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:48.848990   75402 cri.go:89] found id: ""
	I0816 18:16:48.849018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.849030   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:48.849040   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:48.849098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:48.886924   75402 cri.go:89] found id: ""
	I0816 18:16:48.886949   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.886960   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:48.886968   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:48.887022   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:48.923989   75402 cri.go:89] found id: ""
	I0816 18:16:48.924018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.924030   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:48.924038   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:48.924102   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:48.959513   75402 cri.go:89] found id: ""
	I0816 18:16:48.959546   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.959556   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:48.959562   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:48.959614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:48.995615   75402 cri.go:89] found id: ""
	I0816 18:16:48.995651   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.995662   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:48.995673   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:48.995688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:49.008440   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:49.008468   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:49.076761   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:49.076780   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:49.076797   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:49.152855   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:49.152893   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:49.190857   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:49.190887   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:51.745344   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:51.759552   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:51.759628   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:51.795494   75402 cri.go:89] found id: ""
	I0816 18:16:51.795520   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.795531   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:51.795539   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:51.795600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:51.833162   75402 cri.go:89] found id: ""
	I0816 18:16:51.833188   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.833198   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:51.833205   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:51.833265   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:51.866940   75402 cri.go:89] found id: ""
	I0816 18:16:51.866968   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.866979   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:51.866986   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:51.867051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:51.899824   75402 cri.go:89] found id: ""
	I0816 18:16:51.899857   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.899867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:51.899874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:51.899937   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:51.932273   75402 cri.go:89] found id: ""
	I0816 18:16:51.932297   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.932312   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:51.932320   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:51.932390   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:51.966885   75402 cri.go:89] found id: ""
	I0816 18:16:51.966911   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.966922   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:51.966930   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:51.966996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:52.002988   75402 cri.go:89] found id: ""
	I0816 18:16:52.003020   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.003029   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:52.003035   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:52.003098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:52.038858   75402 cri.go:89] found id: ""
	I0816 18:16:52.038894   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.038909   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:52.038919   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:52.038933   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:52.076404   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:52.076431   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:52.127735   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:52.127767   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:52.140657   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:52.140680   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:52.202961   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:52.202989   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:52.203008   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:50.440577   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.441865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:50.705618   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.706795   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:51.448948   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:53.947907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:54.787095   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:54.801258   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:54.801332   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:54.837987   75402 cri.go:89] found id: ""
	I0816 18:16:54.838018   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.838028   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:54.838034   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:54.838118   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:54.872439   75402 cri.go:89] found id: ""
	I0816 18:16:54.872466   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.872477   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:54.872490   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:54.872554   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:54.904676   75402 cri.go:89] found id: ""
	I0816 18:16:54.904706   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.904717   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:54.904724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:54.904783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:54.938101   75402 cri.go:89] found id: ""
	I0816 18:16:54.938134   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.938145   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:54.938154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:54.938218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:54.977409   75402 cri.go:89] found id: ""
	I0816 18:16:54.977442   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.977453   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:54.977460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:54.977521   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:55.013248   75402 cri.go:89] found id: ""
	I0816 18:16:55.013275   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.013286   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:55.013294   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:55.013363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:55.044555   75402 cri.go:89] found id: ""
	I0816 18:16:55.044588   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.044597   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:55.044603   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:55.044690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:55.075970   75402 cri.go:89] found id: ""
	I0816 18:16:55.075997   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.076006   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:55.076014   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:55.076025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:55.149982   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:55.150017   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:55.190160   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:55.190194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:55.242629   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:55.242660   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:55.255229   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:55.255254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:55.324775   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
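Every `describe nodes` attempt in this section fails the same way: kubectl cannot reach the apiserver on localhost:8443 and gets "connection refused", which is consistent with the empty kube-apiserver container listings above, since nothing is listening on that port. A quick way to confirm that from the node is to dial the port directly and inspect the error; this is a hedged illustration, not part of minikube or the test harness.

	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		// Try the apiserver's secure port. "connection refused" means the port is
		// closed (no apiserver process); a timeout would instead point at networking.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("something is listening on 8443")
			return
		}
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Println("connection refused: no process is listening on 8443")
			return
		}
		fmt.Printf("dial failed for another reason: %v\n", err)
	}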
	I0816 18:16:57.824996   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:57.838666   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:57.838740   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:57.872828   75402 cri.go:89] found id: ""
	I0816 18:16:57.872861   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.872869   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:57.872875   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:57.872927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:57.907324   75402 cri.go:89] found id: ""
	I0816 18:16:57.907354   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.907366   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:57.907373   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:57.907433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:57.941657   75402 cri.go:89] found id: ""
	I0816 18:16:57.941682   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.941689   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:57.941695   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:57.941746   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:57.981424   75402 cri.go:89] found id: ""
	I0816 18:16:57.981466   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.981480   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:57.981489   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:57.981562   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:58.015534   75402 cri.go:89] found id: ""
	I0816 18:16:58.015587   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.015598   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:58.015606   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:58.015669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:58.047875   75402 cri.go:89] found id: ""
	I0816 18:16:58.047908   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.047917   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:58.047923   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:58.047976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:58.079294   75402 cri.go:89] found id: ""
	I0816 18:16:58.079324   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.079334   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:58.079342   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:58.079406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:54.940977   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.439254   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.208298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.706380   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.948080   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.949589   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:58.112357   75402 cri.go:89] found id: ""
	I0816 18:16:58.112389   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.112401   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:58.112413   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:58.112428   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:58.159903   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:58.159934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:58.172763   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:58.172789   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:58.245827   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:58.245856   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:58.245872   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:58.325008   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:58.325049   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:00.864354   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:00.877517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:00.877593   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:00.915396   75402 cri.go:89] found id: ""
	I0816 18:17:00.915428   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.915438   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:00.915446   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:00.915611   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:00.953950   75402 cri.go:89] found id: ""
	I0816 18:17:00.953977   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.953987   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:00.953993   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:00.954051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:00.987673   75402 cri.go:89] found id: ""
	I0816 18:17:00.987703   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.987713   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:00.987721   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:00.987784   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:01.021230   75402 cri.go:89] found id: ""
	I0816 18:17:01.021277   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.021308   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:01.021315   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:01.021388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:01.057087   75402 cri.go:89] found id: ""
	I0816 18:17:01.057117   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.057127   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:01.057135   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:01.057207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:01.094142   75402 cri.go:89] found id: ""
	I0816 18:17:01.094168   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.094176   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:01.094183   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:01.094233   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:01.132799   75402 cri.go:89] found id: ""
	I0816 18:17:01.132824   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.132831   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:01.132837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:01.132888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:01.173367   75402 cri.go:89] found id: ""
	I0816 18:17:01.173402   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.173414   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:01.173425   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:01.173443   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:01.186856   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:01.186896   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:01.259913   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:01.259941   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:01.259955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:01.340914   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:01.340947   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:01.381023   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:01.381058   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:59.440314   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.440377   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:59.706750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.707186   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:00.448182   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:02.448773   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:03.933420   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:03.946940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:03.947008   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:03.984529   75402 cri.go:89] found id: ""
	I0816 18:17:03.984560   75402 logs.go:276] 0 containers: []
	W0816 18:17:03.984571   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:03.984581   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:03.984668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:04.017900   75402 cri.go:89] found id: ""
	I0816 18:17:04.017929   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.017940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:04.017948   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:04.018009   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:04.050837   75402 cri.go:89] found id: ""
	I0816 18:17:04.050871   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.050888   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:04.050896   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:04.050959   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:04.085448   75402 cri.go:89] found id: ""
	I0816 18:17:04.085477   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.085487   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:04.085495   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:04.085564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:04.118177   75402 cri.go:89] found id: ""
	I0816 18:17:04.118203   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.118213   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:04.118220   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:04.118284   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:04.150289   75402 cri.go:89] found id: ""
	I0816 18:17:04.150317   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.150330   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:04.150338   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:04.150404   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:04.184258   75402 cri.go:89] found id: ""
	I0816 18:17:04.184282   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.184290   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:04.184295   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:04.184347   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:04.217142   75402 cri.go:89] found id: ""
	I0816 18:17:04.217174   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.217184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:04.217192   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:04.217204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:04.253000   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:04.253034   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:04.304978   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:04.305018   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:04.320210   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:04.320241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:04.396146   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:04.396169   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:04.396184   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:06.980747   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:06.992944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:06.993006   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:07.026303   75402 cri.go:89] found id: ""
	I0816 18:17:07.026356   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.026368   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:07.026376   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:07.026443   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:07.059226   75402 cri.go:89] found id: ""
	I0816 18:17:07.059257   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.059268   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:07.059277   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:07.059339   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:07.092142   75402 cri.go:89] found id: ""
	I0816 18:17:07.092171   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.092182   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:07.092188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:07.092248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:07.125284   75402 cri.go:89] found id: ""
	I0816 18:17:07.125330   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.125347   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:07.125355   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:07.125420   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:07.163890   75402 cri.go:89] found id: ""
	I0816 18:17:07.163919   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.163930   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:07.163938   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:07.164002   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:07.197988   75402 cri.go:89] found id: ""
	I0816 18:17:07.198014   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.198025   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:07.198033   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:07.198116   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:07.232709   75402 cri.go:89] found id: ""
	I0816 18:17:07.232738   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.232749   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:07.232756   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:07.232817   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:07.264514   75402 cri.go:89] found id: ""
	I0816 18:17:07.264548   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.264558   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:07.264569   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:07.264583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:07.316138   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:07.316173   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:07.329659   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:07.329688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:07.397345   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:07.397380   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:07.397397   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:07.481245   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:07.481280   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:03.940100   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:05.940355   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.940821   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.207253   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:06.705745   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:08.706828   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.949027   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.447957   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.024405   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:10.036860   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:10.036927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.069402   75402 cri.go:89] found id: ""
	I0816 18:17:10.069436   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.069448   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:10.069458   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:10.069511   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:10.101480   75402 cri.go:89] found id: ""
	I0816 18:17:10.101508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.101518   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:10.101529   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:10.101601   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:10.131673   75402 cri.go:89] found id: ""
	I0816 18:17:10.131708   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.131719   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:10.131726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:10.131821   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:10.166476   75402 cri.go:89] found id: ""
	I0816 18:17:10.166508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.166518   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:10.166525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:10.166590   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:10.199296   75402 cri.go:89] found id: ""
	I0816 18:17:10.199321   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.199332   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:10.199340   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:10.199406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:10.232640   75402 cri.go:89] found id: ""
	I0816 18:17:10.232672   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.232683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:10.232691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:10.232775   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:10.263958   75402 cri.go:89] found id: ""
	I0816 18:17:10.263988   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.263998   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:10.264003   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:10.264052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:10.295904   75402 cri.go:89] found id: ""
	I0816 18:17:10.295929   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.295937   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:10.295946   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:10.295957   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:10.344874   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:10.344909   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:10.358523   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:10.358552   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:10.433311   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:10.433334   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:10.433351   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:10.514580   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:10.514620   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.053815   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:13.068517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:13.068597   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.440472   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:12.939209   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.707438   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.207630   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:09.947889   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:11.949408   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:14.447906   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.104251   75402 cri.go:89] found id: ""
	I0816 18:17:13.104279   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.104313   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:13.104321   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:13.104375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:13.137415   75402 cri.go:89] found id: ""
	I0816 18:17:13.137442   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.137453   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:13.137461   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:13.137510   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:13.174165   75402 cri.go:89] found id: ""
	I0816 18:17:13.174191   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.174203   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:13.174210   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:13.174271   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:13.206789   75402 cri.go:89] found id: ""
	I0816 18:17:13.206814   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.206823   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:13.206831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:13.206892   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:13.238950   75402 cri.go:89] found id: ""
	I0816 18:17:13.238975   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.238984   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:13.238990   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:13.239037   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:13.271485   75402 cri.go:89] found id: ""
	I0816 18:17:13.271518   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.271535   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:13.271544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:13.271612   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:13.307576   75402 cri.go:89] found id: ""
	I0816 18:17:13.307610   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.307622   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:13.307632   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:13.307698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:13.339746   75402 cri.go:89] found id: ""
	I0816 18:17:13.339792   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.339802   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:13.339813   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:13.339827   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:13.352847   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:13.352875   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:13.440397   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:13.440418   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:13.440432   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:13.514879   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:13.514916   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.553848   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:13.553882   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.103318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:16.115837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:16.115922   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:16.147079   75402 cri.go:89] found id: ""
	I0816 18:17:16.147108   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.147119   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:16.147127   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:16.147189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:16.184207   75402 cri.go:89] found id: ""
	I0816 18:17:16.184233   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.184241   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:16.184247   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:16.184295   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:16.219036   75402 cri.go:89] found id: ""
	I0816 18:17:16.219065   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.219072   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:16.219078   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:16.219163   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:16.251269   75402 cri.go:89] found id: ""
	I0816 18:17:16.251307   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.251320   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:16.251329   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:16.251394   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:16.286549   75402 cri.go:89] found id: ""
	I0816 18:17:16.286576   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.286585   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:16.286591   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:16.286647   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:16.322017   75402 cri.go:89] found id: ""
	I0816 18:17:16.322045   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.322055   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:16.322063   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:16.322128   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:16.353606   75402 cri.go:89] found id: ""
	I0816 18:17:16.353636   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.353646   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:16.353653   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:16.353719   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:16.386973   75402 cri.go:89] found id: ""
	I0816 18:17:16.387005   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.387016   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:16.387027   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:16.387039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.437031   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:16.437066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:16.451258   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:16.451292   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:16.519130   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:16.519155   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:16.519170   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:16.598591   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:16.598626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:14.939993   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.440655   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:15.705969   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.706271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:16.449266   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:18.948220   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:19.147916   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:19.160525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:19.160600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:19.193494   75402 cri.go:89] found id: ""
	I0816 18:17:19.193520   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.193527   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:19.193533   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:19.193599   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:19.230936   75402 cri.go:89] found id: ""
	I0816 18:17:19.230963   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.230971   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:19.230976   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:19.231029   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:19.263713   75402 cri.go:89] found id: ""
	I0816 18:17:19.263735   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.263742   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:19.263748   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:19.263794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:19.294609   75402 cri.go:89] found id: ""
	I0816 18:17:19.294635   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.294642   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:19.294647   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:19.294698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:19.329278   75402 cri.go:89] found id: ""
	I0816 18:17:19.329303   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.329313   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:19.329319   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:19.329368   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:19.362007   75402 cri.go:89] found id: ""
	I0816 18:17:19.362043   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.362052   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:19.362067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:19.362120   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:19.395190   75402 cri.go:89] found id: ""
	I0816 18:17:19.395217   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.395248   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:19.395255   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:19.395302   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:19.426962   75402 cri.go:89] found id: ""
	I0816 18:17:19.426991   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.427002   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:19.427012   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:19.427027   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.441319   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:19.441346   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:19.511390   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:19.511409   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:19.511425   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:19.590897   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:19.590935   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:19.628753   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:19.628781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.182534   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:22.194844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:22.194917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:22.228225   75402 cri.go:89] found id: ""
	I0816 18:17:22.228247   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.228269   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:22.228276   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:22.228325   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:22.258614   75402 cri.go:89] found id: ""
	I0816 18:17:22.258646   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.258654   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:22.258660   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:22.258708   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:22.289103   75402 cri.go:89] found id: ""
	I0816 18:17:22.289136   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.289147   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:22.289154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:22.289215   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:22.321828   75402 cri.go:89] found id: ""
	I0816 18:17:22.321857   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.321869   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:22.321877   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:22.321942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:22.353557   75402 cri.go:89] found id: ""
	I0816 18:17:22.353588   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.353597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:22.353602   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:22.353660   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:22.385078   75402 cri.go:89] found id: ""
	I0816 18:17:22.385103   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.385110   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:22.385116   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:22.385189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:22.415864   75402 cri.go:89] found id: ""
	I0816 18:17:22.415900   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.415913   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:22.415922   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:22.415990   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:22.449895   75402 cri.go:89] found id: ""
	I0816 18:17:22.449922   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.449942   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:22.449957   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:22.449974   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:22.523055   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:22.523073   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:22.523084   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:22.599680   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:22.599719   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:22.638021   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:22.638057   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.688970   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:22.689010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.941154   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.440580   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:20.207713   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.706805   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:21.448399   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:23.448444   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.202748   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:25.217316   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:25.217388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:25.249528   75402 cri.go:89] found id: ""
	I0816 18:17:25.249558   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.249566   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:25.249578   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:25.249625   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:25.282667   75402 cri.go:89] found id: ""
	I0816 18:17:25.282696   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.282706   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:25.282712   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:25.282764   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:25.314061   75402 cri.go:89] found id: ""
	I0816 18:17:25.314091   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.314101   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:25.314108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:25.314161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:25.351260   75402 cri.go:89] found id: ""
	I0816 18:17:25.351287   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.351296   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:25.351301   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:25.351352   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:25.388303   75402 cri.go:89] found id: ""
	I0816 18:17:25.388334   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.388345   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:25.388352   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:25.388412   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:25.422133   75402 cri.go:89] found id: ""
	I0816 18:17:25.422161   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.422169   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:25.422175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:25.422232   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:25.456749   75402 cri.go:89] found id: ""
	I0816 18:17:25.456775   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.456783   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:25.456789   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:25.456836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:25.494783   75402 cri.go:89] found id: ""
	I0816 18:17:25.494809   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.494817   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:25.494825   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:25.494836   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:25.561253   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:25.561290   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.580349   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:25.580383   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:25.656333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:25.656361   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:25.656378   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:25.733479   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:25.733515   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:24.444069   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.939743   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:24.707849   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.709711   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.448555   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:27.449070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:28.272217   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:28.285750   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:28.285822   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:28.318230   75402 cri.go:89] found id: ""
	I0816 18:17:28.318260   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.318268   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:28.318275   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:28.318344   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:28.351766   75402 cri.go:89] found id: ""
	I0816 18:17:28.351798   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.351808   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:28.351814   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:28.351872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:28.385543   75402 cri.go:89] found id: ""
	I0816 18:17:28.385572   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.385581   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:28.385588   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:28.385653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:28.418808   75402 cri.go:89] found id: ""
	I0816 18:17:28.418837   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.418846   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:28.418852   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:28.418900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:28.453883   75402 cri.go:89] found id: ""
	I0816 18:17:28.453911   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.453922   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:28.453929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:28.453996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:28.486261   75402 cri.go:89] found id: ""
	I0816 18:17:28.486291   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.486304   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:28.486310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:28.486366   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:28.520617   75402 cri.go:89] found id: ""
	I0816 18:17:28.520658   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.520670   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:28.520678   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:28.520731   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:28.552996   75402 cri.go:89] found id: ""
	I0816 18:17:28.553026   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.553036   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:28.553046   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:28.553061   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:28.604149   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:28.604192   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:28.617393   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:28.617421   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:28.683258   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:28.683279   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:28.683294   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.766933   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:28.766977   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.305897   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:31.326070   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:31.326143   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:31.375314   75402 cri.go:89] found id: ""
	I0816 18:17:31.375350   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.375361   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:31.375369   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:31.375429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:31.407372   75402 cri.go:89] found id: ""
	I0816 18:17:31.407398   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.407406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:31.407411   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:31.407459   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:31.445679   75402 cri.go:89] found id: ""
	I0816 18:17:31.445706   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.445714   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:31.445720   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:31.445781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:31.480040   75402 cri.go:89] found id: ""
	I0816 18:17:31.480072   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.480080   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:31.480085   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:31.480145   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:31.511143   75402 cri.go:89] found id: ""
	I0816 18:17:31.511171   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.511182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:31.511188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:31.511252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:31.544254   75402 cri.go:89] found id: ""
	I0816 18:17:31.544282   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.544293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:31.544300   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:31.544363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:31.579007   75402 cri.go:89] found id: ""
	I0816 18:17:31.579033   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.579041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:31.579046   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:31.579108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:31.619966   75402 cri.go:89] found id: ""
	I0816 18:17:31.619995   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.620005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:31.620018   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:31.620035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.657784   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:31.657815   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:31.706824   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:31.706853   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:31.719696   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:31.719721   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:31.786096   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:31.786124   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:31.786142   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.940711   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.440514   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.206929   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.706188   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:33.706244   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.948053   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:32.448453   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.363862   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:34.377365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:34.377430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:34.414191   75402 cri.go:89] found id: ""
	I0816 18:17:34.414216   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.414223   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:34.414229   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:34.414285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:34.446811   75402 cri.go:89] found id: ""
	I0816 18:17:34.446836   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.446843   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:34.446848   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:34.446905   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:34.477582   75402 cri.go:89] found id: ""
	I0816 18:17:34.477615   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.477627   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:34.477634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:34.477695   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:34.507868   75402 cri.go:89] found id: ""
	I0816 18:17:34.507901   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.507912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:34.507921   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:34.507984   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:34.538719   75402 cri.go:89] found id: ""
	I0816 18:17:34.538754   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.538765   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:34.538772   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:34.538826   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:34.571445   75402 cri.go:89] found id: ""
	I0816 18:17:34.571468   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.571477   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:34.571484   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:34.571557   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:34.601587   75402 cri.go:89] found id: ""
	I0816 18:17:34.601611   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.601618   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:34.601624   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:34.601669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:34.634850   75402 cri.go:89] found id: ""
	I0816 18:17:34.634878   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.634892   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:34.634906   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:34.634920   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:34.682828   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:34.682859   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:34.695796   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:34.695820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:34.762100   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:34.762121   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:34.762133   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.845329   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:34.845359   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:37.386266   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:37.398940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:37.399005   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:37.433072   75402 cri.go:89] found id: ""
	I0816 18:17:37.433099   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.433112   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:37.433118   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:37.433169   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:37.466968   75402 cri.go:89] found id: ""
	I0816 18:17:37.467001   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.467012   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:37.467021   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:37.467086   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:37.509268   75402 cri.go:89] found id: ""
	I0816 18:17:37.509291   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.509300   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:37.509306   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:37.509365   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:37.541295   75402 cri.go:89] found id: ""
	I0816 18:17:37.541338   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.541350   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:37.541357   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:37.541421   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:37.575423   75402 cri.go:89] found id: ""
	I0816 18:17:37.575453   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.575464   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:37.575472   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:37.575540   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:37.614787   75402 cri.go:89] found id: ""
	I0816 18:17:37.614817   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.614828   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:37.614835   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:37.614896   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:37.646396   75402 cri.go:89] found id: ""
	I0816 18:17:37.646430   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.646441   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:37.646449   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:37.646517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:37.679383   75402 cri.go:89] found id: ""
	I0816 18:17:37.679414   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.679423   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:37.679431   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:37.679442   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:37.729641   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:37.729673   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:37.742420   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:37.742448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:37.812572   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:37.812600   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:37.812615   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:37.887100   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:37.887137   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:33.940380   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.941055   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.440700   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.706903   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.207115   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.947638   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:37.448511   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:39.448944   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.424202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:40.438231   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:40.438337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:40.474614   75402 cri.go:89] found id: ""
	I0816 18:17:40.474639   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.474648   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:40.474653   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:40.474701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:40.510123   75402 cri.go:89] found id: ""
	I0816 18:17:40.510154   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.510162   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:40.510167   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:40.510217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:40.548971   75402 cri.go:89] found id: ""
	I0816 18:17:40.549000   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.549008   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:40.549013   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:40.549069   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:40.595126   75402 cri.go:89] found id: ""
	I0816 18:17:40.595158   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.595167   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:40.595174   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:40.595220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:40.629769   75402 cri.go:89] found id: ""
	I0816 18:17:40.629793   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.629801   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:40.629807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:40.629871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:40.661889   75402 cri.go:89] found id: ""
	I0816 18:17:40.661922   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.661932   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:40.661939   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:40.662001   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:40.697764   75402 cri.go:89] found id: ""
	I0816 18:17:40.697790   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.697801   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:40.697808   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:40.697867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:40.734825   75402 cri.go:89] found id: ""
	I0816 18:17:40.734852   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.734862   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:40.734872   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:40.734939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:40.787975   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:40.788015   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:40.800817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:40.800843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:40.874182   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:40.874205   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:40.874219   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:40.960032   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:40.960066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:40.940284   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.943218   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.207943   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.707356   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:41.947437   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.947887   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.499770   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:43.513726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:43.513806   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:43.548368   75402 cri.go:89] found id: ""
	I0816 18:17:43.548396   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.548406   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:43.548413   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:43.548474   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:43.581177   75402 cri.go:89] found id: ""
	I0816 18:17:43.581205   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.581216   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:43.581223   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:43.581291   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:43.614315   75402 cri.go:89] found id: ""
	I0816 18:17:43.614354   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.614367   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:43.614374   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:43.614437   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:43.648608   75402 cri.go:89] found id: ""
	I0816 18:17:43.648645   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.648658   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:43.648669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:43.648722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:43.680549   75402 cri.go:89] found id: ""
	I0816 18:17:43.680586   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.680597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:43.680604   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:43.680686   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:43.710473   75402 cri.go:89] found id: ""
	I0816 18:17:43.710497   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.710506   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:43.710514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:43.710576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:43.741415   75402 cri.go:89] found id: ""
	I0816 18:17:43.741442   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.741450   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:43.741456   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:43.741505   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:43.775018   75402 cri.go:89] found id: ""
	I0816 18:17:43.775051   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.775063   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:43.775074   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:43.775087   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:43.825596   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:43.825630   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:43.839133   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:43.839161   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:43.905645   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:43.905667   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:43.905679   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:43.988860   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:43.988901   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.525896   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:46.539147   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:46.539229   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:46.570703   75402 cri.go:89] found id: ""
	I0816 18:17:46.570726   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.570734   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:46.570740   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:46.570785   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:46.605909   75402 cri.go:89] found id: ""
	I0816 18:17:46.605939   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.605954   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:46.605961   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:46.606013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:46.638865   75402 cri.go:89] found id: ""
	I0816 18:17:46.638899   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.638911   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:46.638919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:46.638994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:46.671869   75402 cri.go:89] found id: ""
	I0816 18:17:46.671904   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.671917   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:46.671926   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:46.671988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:46.703423   75402 cri.go:89] found id: ""
	I0816 18:17:46.703464   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.703473   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:46.703479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:46.703545   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:46.735824   75402 cri.go:89] found id: ""
	I0816 18:17:46.735853   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.735864   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:46.735871   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:46.735926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:46.767122   75402 cri.go:89] found id: ""
	I0816 18:17:46.767146   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.767154   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:46.767160   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:46.767207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:46.798093   75402 cri.go:89] found id: ""
	I0816 18:17:46.798126   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.798140   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:46.798152   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:46.798167   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.832699   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:46.832725   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:46.884212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:46.884246   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:46.896896   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:46.896921   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:46.968805   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:46.968824   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:46.968838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:45.440474   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.940127   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.206534   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.206973   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.948252   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:48.448086   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.552581   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:49.565134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:49.565212   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:49.597012   75402 cri.go:89] found id: ""
	I0816 18:17:49.597042   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.597057   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:49.597067   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:49.597133   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:49.628902   75402 cri.go:89] found id: ""
	I0816 18:17:49.628935   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.628948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:49.628957   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:49.629025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:49.662668   75402 cri.go:89] found id: ""
	I0816 18:17:49.662698   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.662709   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:49.662715   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:49.662778   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:49.696354   75402 cri.go:89] found id: ""
	I0816 18:17:49.696381   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.696389   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:49.696395   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:49.696487   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:49.730801   75402 cri.go:89] found id: ""
	I0816 18:17:49.730838   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.730849   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:49.730856   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:49.730921   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:49.764474   75402 cri.go:89] found id: ""
	I0816 18:17:49.764503   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.764514   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:49.764522   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:49.764585   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:49.798577   75402 cri.go:89] found id: ""
	I0816 18:17:49.798616   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.798627   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:49.798634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:49.798703   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:49.830987   75402 cri.go:89] found id: ""
	I0816 18:17:49.831016   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.831024   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:49.831032   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:49.831043   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:49.883397   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:49.883433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:49.897208   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:49.897239   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:49.968363   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:49.968386   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:49.968398   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:50.056552   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:50.056583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:52.596191   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:52.609592   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:52.609668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:52.645775   75402 cri.go:89] found id: ""
	I0816 18:17:52.645807   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.645817   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:52.645823   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:52.645869   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:52.677817   75402 cri.go:89] found id: ""
	I0816 18:17:52.677852   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.677862   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:52.677870   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:52.677935   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:52.710618   75402 cri.go:89] found id: ""
	I0816 18:17:52.710648   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.710658   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:52.710664   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:52.710716   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:52.745830   75402 cri.go:89] found id: ""
	I0816 18:17:52.745858   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.745867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:52.745872   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:52.745929   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:52.778511   75402 cri.go:89] found id: ""
	I0816 18:17:52.778538   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.778548   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:52.778567   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:52.778632   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:52.810759   75402 cri.go:89] found id: ""
	I0816 18:17:52.810788   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.810800   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:52.810807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:52.810872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:52.843786   75402 cri.go:89] found id: ""
	I0816 18:17:52.843814   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.843824   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:52.843831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:52.843886   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:52.876886   75402 cri.go:89] found id: ""
	I0816 18:17:52.876914   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.876924   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:52.876934   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:52.876950   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:52.932519   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:52.932559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:52.946645   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:52.946671   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:53.018156   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:53.018177   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:53.018190   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:53.095562   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:53.095600   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:49.940263   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:51.940433   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.707635   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.206027   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:50.449204   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.949591   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.633820   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:55.646170   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:55.646238   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:55.678147   75402 cri.go:89] found id: ""
	I0816 18:17:55.678181   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.678194   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:55.678202   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:55.678264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:55.710910   75402 cri.go:89] found id: ""
	I0816 18:17:55.710938   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.710948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:55.710956   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:55.711012   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:55.744822   75402 cri.go:89] found id: ""
	I0816 18:17:55.744853   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.744863   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:55.744870   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:55.744931   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:55.791677   75402 cri.go:89] found id: ""
	I0816 18:17:55.791708   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.791719   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:55.791727   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:55.791788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:55.826448   75402 cri.go:89] found id: ""
	I0816 18:17:55.826481   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.826492   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:55.826500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:55.826564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:55.861178   75402 cri.go:89] found id: ""
	I0816 18:17:55.861210   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.861219   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:55.861225   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:55.861280   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:55.898073   75402 cri.go:89] found id: ""
	I0816 18:17:55.898099   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.898110   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:55.898117   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:55.898184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:55.931446   75402 cri.go:89] found id: ""
	I0816 18:17:55.931478   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.931487   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:55.931498   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:55.931514   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:55.999910   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:55.999931   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:55.999943   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:56.077240   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:56.077312   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:56.115479   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:56.115506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:56.166954   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:56.166989   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:54.440166   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.939865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:54.206368   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.206710   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.207053   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.448566   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:57.948891   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.680571   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:58.692824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:58.692890   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:58.729761   75402 cri.go:89] found id: ""
	I0816 18:17:58.729786   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.729794   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:58.729799   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:58.729857   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:58.764943   75402 cri.go:89] found id: ""
	I0816 18:17:58.765082   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.765113   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:58.765124   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:58.765179   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:58.801314   75402 cri.go:89] found id: ""
	I0816 18:17:58.801345   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.801357   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:58.801365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:58.801429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:58.833936   75402 cri.go:89] found id: ""
	I0816 18:17:58.833973   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.833982   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:58.833988   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:58.834046   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:58.870108   75402 cri.go:89] found id: ""
	I0816 18:17:58.870137   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.870148   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:58.870155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:58.870219   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:58.904157   75402 cri.go:89] found id: ""
	I0816 18:17:58.904184   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.904194   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:58.904201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:58.904264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:58.937862   75402 cri.go:89] found id: ""
	I0816 18:17:58.937891   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.937901   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:58.937909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:58.937972   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:58.972465   75402 cri.go:89] found id: ""
	I0816 18:17:58.972495   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.972506   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:58.972517   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:58.972532   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:59.047197   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:59.047223   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:59.047238   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:59.126634   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:59.126668   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:59.165528   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:59.165562   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:59.214294   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:59.214433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:01.729662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:01.742582   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:01.742642   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:01.776148   75402 cri.go:89] found id: ""
	I0816 18:18:01.776180   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.776188   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:01.776197   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:01.776243   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:01.809186   75402 cri.go:89] found id: ""
	I0816 18:18:01.809218   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.809229   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:01.809237   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:01.809307   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:01.842379   75402 cri.go:89] found id: ""
	I0816 18:18:01.842406   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.842417   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:01.842425   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:01.842490   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:01.874706   75402 cri.go:89] found id: ""
	I0816 18:18:01.874739   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.874747   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:01.874753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:01.874813   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:01.915567   75402 cri.go:89] found id: ""
	I0816 18:18:01.915596   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.915607   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:01.915615   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:01.915675   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:01.951527   75402 cri.go:89] found id: ""
	I0816 18:18:01.951559   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.951569   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:01.951576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:01.951638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:01.983822   75402 cri.go:89] found id: ""
	I0816 18:18:01.983848   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.983856   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:01.983861   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:01.983909   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:02.018976   75402 cri.go:89] found id: ""
	I0816 18:18:02.019003   75402 logs.go:276] 0 containers: []
	W0816 18:18:02.019012   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:02.019019   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:02.019033   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:02.071096   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:02.071131   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:02.085163   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:02.085189   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:02.154771   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:02.154789   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:02.154800   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:02.242068   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:02.242105   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:58.941456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:01.440404   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.208085   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.705334   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.447843   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.448334   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.790311   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:04.803215   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:04.803298   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:04.835834   75402 cri.go:89] found id: ""
	I0816 18:18:04.835868   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.835879   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:04.835886   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:04.835951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:04.870000   75402 cri.go:89] found id: ""
	I0816 18:18:04.870032   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.870042   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:04.870049   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:04.870111   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:04.906624   75402 cri.go:89] found id: ""
	I0816 18:18:04.906653   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.906663   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:04.906670   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:04.906730   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:04.940115   75402 cri.go:89] found id: ""
	I0816 18:18:04.940139   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.940148   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:04.940155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:04.940213   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:04.974461   75402 cri.go:89] found id: ""
	I0816 18:18:04.974493   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.974503   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:04.974510   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:04.974571   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:05.006593   75402 cri.go:89] found id: ""
	I0816 18:18:05.006618   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.006628   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:05.006635   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:05.006691   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:05.040041   75402 cri.go:89] found id: ""
	I0816 18:18:05.040066   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.040082   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:05.040089   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:05.040144   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:05.072968   75402 cri.go:89] found id: ""
	I0816 18:18:05.072996   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.073005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:05.073014   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:05.073025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:05.124510   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:05.124543   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:05.145566   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:05.145592   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:05.221874   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:05.221898   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:05.221914   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:05.297283   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:05.297316   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:07.837564   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:07.850372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:07.850441   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:07.882879   75402 cri.go:89] found id: ""
	I0816 18:18:07.882906   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.882915   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:07.882920   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:07.882978   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:07.916983   75402 cri.go:89] found id: ""
	I0816 18:18:07.917011   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.917019   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:07.917024   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:07.917075   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:07.953864   75402 cri.go:89] found id: ""
	I0816 18:18:07.953886   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.953896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:07.953903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:07.953951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:07.994375   75402 cri.go:89] found id: ""
	I0816 18:18:07.994399   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.994408   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:07.994414   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:07.994472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:08.029137   75402 cri.go:89] found id: ""
	I0816 18:18:08.029170   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.029182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:08.029189   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:08.029253   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:08.062331   75402 cri.go:89] found id: ""
	I0816 18:18:08.062358   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.062367   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:08.062373   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:08.062430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:08.097021   75402 cri.go:89] found id: ""
	I0816 18:18:08.097044   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.097051   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:08.097056   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:08.097112   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:03.940724   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.441847   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.706298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.707011   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.948066   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.948125   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.948992   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.131147   75402 cri.go:89] found id: ""
	I0816 18:18:08.131174   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.131184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:08.131192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:08.131203   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.182334   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:08.182373   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:08.195459   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:08.195485   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:08.260333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:08.260351   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:08.260363   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:08.344466   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:08.344506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:10.881640   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:10.896400   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:10.896482   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:10.934034   75402 cri.go:89] found id: ""
	I0816 18:18:10.934068   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.934076   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:10.934081   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:10.934130   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:10.966697   75402 cri.go:89] found id: ""
	I0816 18:18:10.966724   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.966733   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:10.966741   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:10.966807   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:11.000540   75402 cri.go:89] found id: ""
	I0816 18:18:11.000568   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.000579   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:11.000587   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:11.000665   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:11.034322   75402 cri.go:89] found id: ""
	I0816 18:18:11.034346   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.034354   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:11.034360   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:11.034407   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:11.067081   75402 cri.go:89] found id: ""
	I0816 18:18:11.067108   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.067116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:11.067122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:11.067170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:11.099726   75402 cri.go:89] found id: ""
	I0816 18:18:11.099753   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.099763   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:11.099770   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:11.099834   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:11.133187   75402 cri.go:89] found id: ""
	I0816 18:18:11.133216   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.133226   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:11.133235   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:11.133315   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:11.167121   75402 cri.go:89] found id: ""
	I0816 18:18:11.167157   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.167166   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:11.167177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:11.167194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:11.181396   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:11.181424   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:11.248286   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:11.248313   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:11.248325   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:11.328546   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:11.328583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:11.365534   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:11.365576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.939686   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.941097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.440001   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:09.207018   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:11.207677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.706818   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.949461   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.448057   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.919889   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:13.935097   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:13.935178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:13.973196   75402 cri.go:89] found id: ""
	I0816 18:18:13.973225   75402 logs.go:276] 0 containers: []
	W0816 18:18:13.973236   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:13.973244   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:13.973328   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:14.011913   75402 cri.go:89] found id: ""
	I0816 18:18:14.011936   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.011944   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:14.011950   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:14.012013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:14.048418   75402 cri.go:89] found id: ""
	I0816 18:18:14.048447   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.048459   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:14.048466   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:14.048515   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:14.082462   75402 cri.go:89] found id: ""
	I0816 18:18:14.082496   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.082506   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:14.082514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:14.082576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:14.114958   75402 cri.go:89] found id: ""
	I0816 18:18:14.114986   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.114996   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:14.115005   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:14.115067   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:14.154829   75402 cri.go:89] found id: ""
	I0816 18:18:14.154865   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.154878   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:14.154888   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:14.154957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:14.190012   75402 cri.go:89] found id: ""
	I0816 18:18:14.190045   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.190053   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:14.190058   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:14.190108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:14.223314   75402 cri.go:89] found id: ""
	I0816 18:18:14.223341   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.223350   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:14.223360   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:14.223381   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:14.274995   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:14.275035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:14.288518   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:14.288564   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:14.365668   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:14.365691   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:14.365705   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:14.445828   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:14.445866   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:16.981802   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:16.994729   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:16.994794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:17.029790   75402 cri.go:89] found id: ""
	I0816 18:18:17.029821   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.029839   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:17.029848   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:17.029912   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:17.063194   75402 cri.go:89] found id: ""
	I0816 18:18:17.063223   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.063233   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:17.063240   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:17.063293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:17.097808   75402 cri.go:89] found id: ""
	I0816 18:18:17.097831   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.097839   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:17.097844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:17.097900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:17.132646   75402 cri.go:89] found id: ""
	I0816 18:18:17.132682   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.132691   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:17.132697   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:17.132751   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:17.164285   75402 cri.go:89] found id: ""
	I0816 18:18:17.164316   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.164328   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:17.164335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:17.164391   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:17.195642   75402 cri.go:89] found id: ""
	I0816 18:18:17.195672   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.195683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:17.195691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:17.195754   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:17.228005   75402 cri.go:89] found id: ""
	I0816 18:18:17.228033   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.228041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:17.228047   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:17.228107   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:17.279195   75402 cri.go:89] found id: ""
	I0816 18:18:17.279229   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.279241   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:17.279253   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:17.279270   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:17.360084   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:17.360125   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:17.405184   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:17.405210   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:17.457453   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:17.457483   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:17.471472   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:17.471502   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:17.536478   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:15.939660   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.940456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:16.207019   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:18.706191   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:15.450419   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.948912   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.036644   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:20.050169   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:20.050244   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.087943   75402 cri.go:89] found id: ""
	I0816 18:18:20.087971   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.087981   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:20.087988   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:20.088051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:20.119908   75402 cri.go:89] found id: ""
	I0816 18:18:20.119931   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.119940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:20.119945   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:20.120013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:20.152115   75402 cri.go:89] found id: ""
	I0816 18:18:20.152146   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.152156   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:20.152162   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:20.152209   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:20.189464   75402 cri.go:89] found id: ""
	I0816 18:18:20.189488   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.189495   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:20.189500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:20.189550   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:20.224779   75402 cri.go:89] found id: ""
	I0816 18:18:20.224807   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.224817   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:20.224824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:20.224888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:20.257021   75402 cri.go:89] found id: ""
	I0816 18:18:20.257048   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.257059   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:20.257067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:20.257121   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:20.290991   75402 cri.go:89] found id: ""
	I0816 18:18:20.291023   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.291032   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:20.291039   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:20.291099   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:20.323674   75402 cri.go:89] found id: ""
	I0816 18:18:20.323704   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.323715   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:20.323726   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:20.323742   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:20.373411   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:20.373447   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:20.386954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:20.386981   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:20.464366   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.464384   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:20.464403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:20.541836   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:20.541881   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:23.085071   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:23.100460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:23.100524   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.440656   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.942713   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.706771   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.207824   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.448676   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.141239   75402 cri.go:89] found id: ""
	I0816 18:18:23.141269   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.141280   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:23.141287   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:23.141354   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:23.172914   75402 cri.go:89] found id: ""
	I0816 18:18:23.172941   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.172950   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:23.172958   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:23.173015   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:23.205593   75402 cri.go:89] found id: ""
	I0816 18:18:23.205621   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.205632   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:23.205640   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:23.205706   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:23.239358   75402 cri.go:89] found id: ""
	I0816 18:18:23.239383   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.239392   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:23.239401   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:23.239463   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:23.271798   75402 cri.go:89] found id: ""
	I0816 18:18:23.271828   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.271838   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:23.271844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:23.271911   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:23.305287   75402 cri.go:89] found id: ""
	I0816 18:18:23.305316   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.305327   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:23.305335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:23.305397   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:23.344041   75402 cri.go:89] found id: ""
	I0816 18:18:23.344067   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.344075   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:23.344080   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:23.344134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:23.376540   75402 cri.go:89] found id: ""
	I0816 18:18:23.376571   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.376583   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:23.376601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:23.376616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:23.428265   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:23.428301   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:23.441377   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:23.441404   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:23.509219   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:23.509243   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:23.509259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:23.589151   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:23.589186   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.126176   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:26.140228   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:26.140292   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:26.176768   75402 cri.go:89] found id: ""
	I0816 18:18:26.176807   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.176820   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:26.176829   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:26.176887   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:26.212357   75402 cri.go:89] found id: ""
	I0816 18:18:26.212383   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.212390   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:26.212396   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:26.212457   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:26.245256   75402 cri.go:89] found id: ""
	I0816 18:18:26.245290   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.245302   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:26.245309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:26.245370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:26.277525   75402 cri.go:89] found id: ""
	I0816 18:18:26.277561   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.277569   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:26.277575   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:26.277627   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:26.310928   75402 cri.go:89] found id: ""
	I0816 18:18:26.310956   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.310967   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:26.310976   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:26.311052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:26.344595   75402 cri.go:89] found id: ""
	I0816 18:18:26.344647   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.344661   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:26.344669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:26.344741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:26.377776   75402 cri.go:89] found id: ""
	I0816 18:18:26.377805   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.377814   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:26.377820   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:26.377872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:26.411139   75402 cri.go:89] found id: ""
	I0816 18:18:26.411167   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.411179   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:26.411190   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:26.411204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:26.493802   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:26.493838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.529542   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:26.529576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:26.583544   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:26.583588   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:26.596429   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:26.596459   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:26.667858   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:25.441062   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.940609   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.706109   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:28.206196   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.448352   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.947950   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:29.168766   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:29.182032   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:29.182103   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:29.220213   75402 cri.go:89] found id: ""
	I0816 18:18:29.220239   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.220247   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:29.220253   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:29.220300   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:29.257820   75402 cri.go:89] found id: ""
	I0816 18:18:29.257850   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.257861   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:29.257867   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:29.257933   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:29.290450   75402 cri.go:89] found id: ""
	I0816 18:18:29.290473   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.290480   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:29.290485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:29.290546   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:29.328032   75402 cri.go:89] found id: ""
	I0816 18:18:29.328061   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.328070   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:29.328076   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:29.328135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:29.362104   75402 cri.go:89] found id: ""
	I0816 18:18:29.362132   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.362141   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:29.362149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:29.362218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:29.395258   75402 cri.go:89] found id: ""
	I0816 18:18:29.395290   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.395301   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:29.395309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:29.395375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:29.426617   75402 cri.go:89] found id: ""
	I0816 18:18:29.426646   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.426656   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:29.426663   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:29.426725   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:29.462861   75402 cri.go:89] found id: ""
	I0816 18:18:29.462890   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.462901   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:29.462912   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:29.462928   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:29.514882   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:29.514915   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:29.528101   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:29.528128   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:29.598983   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.599005   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:29.599020   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:29.684955   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:29.684991   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.230155   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:32.244158   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:32.244226   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:32.281993   75402 cri.go:89] found id: ""
	I0816 18:18:32.282020   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.282031   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:32.282037   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:32.282100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:32.316870   75402 cri.go:89] found id: ""
	I0816 18:18:32.316896   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.316906   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:32.316914   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:32.316976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:32.352597   75402 cri.go:89] found id: ""
	I0816 18:18:32.352637   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.352649   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:32.352656   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:32.352722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:32.387520   75402 cri.go:89] found id: ""
	I0816 18:18:32.387564   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.387576   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:32.387584   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:32.387638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:32.421499   75402 cri.go:89] found id: ""
	I0816 18:18:32.421526   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.421537   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:32.421544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:32.421603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:32.460048   75402 cri.go:89] found id: ""
	I0816 18:18:32.460075   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.460086   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:32.460093   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:32.460151   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:32.498148   75402 cri.go:89] found id: ""
	I0816 18:18:32.498176   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.498184   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:32.498190   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:32.498248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:32.530683   75402 cri.go:89] found id: ""
	I0816 18:18:32.530717   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.530730   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:32.530741   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:32.530762   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:32.614776   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:32.614820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.655628   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:32.655667   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:32.722763   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:32.722807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:32.739817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:32.739847   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:32.819297   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:30.440684   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.441210   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.206433   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.707436   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.448781   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.457660   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:35.320173   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:35.332427   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:35.332503   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:35.366316   75402 cri.go:89] found id: ""
	I0816 18:18:35.366346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.366357   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:35.366365   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:35.366433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:35.399308   75402 cri.go:89] found id: ""
	I0816 18:18:35.399346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.399357   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:35.399367   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:35.399434   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:35.434926   75402 cri.go:89] found id: ""
	I0816 18:18:35.434958   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.434971   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:35.434980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:35.435042   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:35.473222   75402 cri.go:89] found id: ""
	I0816 18:18:35.473247   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.473258   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:35.473266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:35.473343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:35.505484   75402 cri.go:89] found id: ""
	I0816 18:18:35.505521   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.505533   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:35.505540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:35.505608   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:35.540532   75402 cri.go:89] found id: ""
	I0816 18:18:35.540573   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.540584   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:35.540590   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:35.540663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:35.574205   75402 cri.go:89] found id: ""
	I0816 18:18:35.574235   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.574245   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:35.574252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:35.574343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:35.614707   75402 cri.go:89] found id: ""
	I0816 18:18:35.614732   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.614739   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:35.614747   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:35.614759   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:35.690830   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:35.690861   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:35.726601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:35.726627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:35.774706   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:35.774736   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:35.787557   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:35.787616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:35.857474   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:34.940337   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.440507   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:34.701151   74828 pod_ready.go:82] duration metric: took 4m0.000965442s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:34.701178   74828 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:34.701196   74828 pod_ready.go:39] duration metric: took 4m13.502588966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:34.701228   74828 kubeadm.go:597] duration metric: took 4m21.306103533s to restartPrimaryControlPlane
	W0816 18:18:34.701293   74828 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:34.701330   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:34.948583   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.447544   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:39.448942   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:38.358057   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:38.371128   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:38.371189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:38.404812   75402 cri.go:89] found id: ""
	I0816 18:18:38.404844   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.404855   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:38.404864   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:38.404926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:38.437922   75402 cri.go:89] found id: ""
	I0816 18:18:38.437950   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.437960   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:38.437967   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:38.438023   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:38.471474   75402 cri.go:89] found id: ""
	I0816 18:18:38.471509   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.471519   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:38.471525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:38.471582   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:38.510132   75402 cri.go:89] found id: ""
	I0816 18:18:38.510158   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.510168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:38.510184   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:38.510246   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:38.542212   75402 cri.go:89] found id: ""
	I0816 18:18:38.542251   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.542262   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:38.542269   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:38.542341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:38.579037   75402 cri.go:89] found id: ""
	I0816 18:18:38.579068   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.579076   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:38.579082   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:38.579129   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:38.619219   75402 cri.go:89] found id: ""
	I0816 18:18:38.619252   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.619263   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:38.619272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:38.619335   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:38.655124   75402 cri.go:89] found id: ""
	I0816 18:18:38.655149   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.655169   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:38.655180   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:38.655194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:38.737857   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:38.737894   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:38.779777   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:38.779806   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:38.831556   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:38.831590   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:38.844496   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:38.844523   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:38.914543   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.415612   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:41.428187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:41.428251   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:41.462932   75402 cri.go:89] found id: ""
	I0816 18:18:41.462964   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.462975   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:41.462983   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:41.463043   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:41.497712   75402 cri.go:89] found id: ""
	I0816 18:18:41.497739   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.497748   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:41.497754   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:41.497804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:41.528430   75402 cri.go:89] found id: ""
	I0816 18:18:41.528455   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.528463   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:41.528468   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:41.528527   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:41.560048   75402 cri.go:89] found id: ""
	I0816 18:18:41.560071   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.560081   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:41.560088   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:41.560142   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:41.592536   75402 cri.go:89] found id: ""
	I0816 18:18:41.592566   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.592577   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:41.592585   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:41.592663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:41.626850   75402 cri.go:89] found id: ""
	I0816 18:18:41.626884   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.626894   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:41.626902   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:41.626965   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:41.660452   75402 cri.go:89] found id: ""
	I0816 18:18:41.660478   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.660486   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:41.660491   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:41.660542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:41.695990   75402 cri.go:89] found id: ""
	I0816 18:18:41.696012   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.696020   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:41.696028   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:41.696039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:41.733107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:41.733134   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:41.782812   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:41.782843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:41.795954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:41.795984   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:41.867473   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.867526   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:41.867545   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:39.442037   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.940088   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.948682   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:43.942215   75006 pod_ready.go:82] duration metric: took 4m0.000164284s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:43.942239   75006 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:43.942255   75006 pod_ready.go:39] duration metric: took 4m12.163955241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:43.942279   75006 kubeadm.go:597] duration metric: took 4m21.898271101s to restartPrimaryControlPlane
	W0816 18:18:43.942326   75006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:43.942352   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:44.450340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:44.463299   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:44.463361   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:44.495068   75402 cri.go:89] found id: ""
	I0816 18:18:44.495098   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.495108   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:44.495116   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:44.495221   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:44.529615   75402 cri.go:89] found id: ""
	I0816 18:18:44.529638   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.529646   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:44.529651   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:44.529701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:44.565275   75402 cri.go:89] found id: ""
	I0816 18:18:44.565298   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.565306   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:44.565321   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:44.565384   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:44.598554   75402 cri.go:89] found id: ""
	I0816 18:18:44.598590   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.598601   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:44.598609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:44.598673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:44.631389   75402 cri.go:89] found id: ""
	I0816 18:18:44.631422   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.631436   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:44.631446   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:44.631519   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:44.663986   75402 cri.go:89] found id: ""
	I0816 18:18:44.664013   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.664023   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:44.664031   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:44.664095   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:44.700238   75402 cri.go:89] found id: ""
	I0816 18:18:44.700263   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.700272   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:44.700277   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:44.700330   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:44.732737   75402 cri.go:89] found id: ""
	I0816 18:18:44.732766   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.732779   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:44.732790   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:44.732807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.806427   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:44.806462   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:44.842965   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:44.842994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:44.895745   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:44.895781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:44.909850   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:44.909885   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:44.979315   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:47.479563   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:47.491876   75402 kubeadm.go:597] duration metric: took 4m4.431091965s to restartPrimaryControlPlane
	W0816 18:18:47.491939   75402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:47.491962   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:43.941047   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:46.440592   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:48.441208   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:51.168302   75402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676317513s)
	I0816 18:18:51.168387   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:18:51.182492   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:18:51.192403   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:18:51.202058   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:18:51.202075   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:18:51.202115   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:18:51.210661   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:18:51.210721   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:18:51.219979   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:18:51.228422   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:18:51.228488   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:18:51.237159   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.245555   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:18:51.245622   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.253986   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:18:51.261885   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:18:51.261927   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:18:51.270479   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:18:51.335784   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:18:51.335883   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:18:51.482910   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:18:51.483069   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:18:51.483228   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:18:51.652730   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:18:51.655077   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:18:51.655185   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:18:51.655304   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:18:51.655425   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:18:51.655521   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:18:51.657408   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:18:51.657485   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:18:51.657561   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:18:51.657645   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:18:51.657748   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:18:51.657854   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:18:51.657911   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:18:51.657984   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:18:51.720786   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:18:51.991165   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:18:52.140983   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:18:52.453361   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:18:52.467210   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:18:52.469222   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:18:52.469338   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:18:52.590938   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:18:52.592875   75402 out.go:235]   - Booting up control plane ...
	I0816 18:18:52.592987   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:18:52.602597   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:18:52.603616   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:18:52.604417   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:18:52.606669   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:18:50.939639   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:52.940202   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:54.940917   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:57.439382   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:59.443139   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:01.940496   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:00.803654   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.102297191s)
	I0816 18:19:00.803740   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:00.818126   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:00.827602   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:00.836389   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:00.836410   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:00.836455   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:19:00.844830   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:00.844880   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:00.853736   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:19:00.862795   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:00.862855   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:00.872056   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.880410   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:00.880461   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.889000   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:19:00.897508   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:00.897568   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:00.906256   74828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:00.953336   74828 kubeadm.go:310] W0816 18:19:00.929461    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:00.955337   74828 kubeadm.go:310] W0816 18:19:00.931382    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:01.068247   74828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:03.940545   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:06.439727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:08.440027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:09.225829   74828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:09.225908   74828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:09.226014   74828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:09.226126   74828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:09.226242   74828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:09.226329   74828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:09.228065   74828 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:09.228133   74828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:09.228183   74828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:09.228252   74828 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:09.228315   74828 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:09.228403   74828 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:09.228489   74828 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:09.228584   74828 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:09.228686   74828 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:09.228787   74828 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:09.228864   74828 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:09.228903   74828 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:09.228983   74828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:09.229052   74828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:09.229147   74828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:09.229234   74828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:09.229332   74828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:09.229410   74828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:09.229532   74828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:09.229607   74828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.230874   74828 out.go:235]   - Booting up control plane ...
	I0816 18:19:09.230948   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:09.231032   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:09.231090   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:09.231202   74828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:09.231321   74828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:09.231381   74828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:09.231572   74828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:09.231662   74828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:09.231711   74828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.32263ms
	I0816 18:19:09.231774   74828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:09.231824   74828 kubeadm.go:310] [api-check] The API server is healthy after 5.002367118s
	I0816 18:19:09.231923   74828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:09.232091   74828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:09.232166   74828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:09.232419   74828 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-864476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:09.232497   74828 kubeadm.go:310] [bootstrap-token] Using token: 6m1jus.xr9uhx26t28q092p
	I0816 18:19:09.233962   74828 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:09.234068   74828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:09.234164   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:09.234315   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:09.234425   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:09.234522   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:09.234615   74828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:09.234775   74828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:09.234830   74828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:09.234892   74828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:09.234901   74828 kubeadm.go:310] 
	I0816 18:19:09.234971   74828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:09.234980   74828 kubeadm.go:310] 
	I0816 18:19:09.235067   74828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:09.235076   74828 kubeadm.go:310] 
	I0816 18:19:09.235115   74828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:09.235194   74828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:09.235271   74828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:09.235280   74828 kubeadm.go:310] 
	I0816 18:19:09.235367   74828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:09.235376   74828 kubeadm.go:310] 
	I0816 18:19:09.235448   74828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:09.235459   74828 kubeadm.go:310] 
	I0816 18:19:09.235533   74828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:09.235607   74828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:09.235677   74828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:09.235683   74828 kubeadm.go:310] 
	I0816 18:19:09.235795   74828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:09.235907   74828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:09.235916   74828 kubeadm.go:310] 
	I0816 18:19:09.235986   74828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236080   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:09.236099   74828 kubeadm.go:310] 	--control-plane 
	I0816 18:19:09.236105   74828 kubeadm.go:310] 
	I0816 18:19:09.236177   74828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:09.236185   74828 kubeadm.go:310] 
	I0816 18:19:09.236268   74828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236403   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:09.236416   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:19:09.236422   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:09.237971   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:10.069497   75006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.127122656s)
	I0816 18:19:10.069585   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:10.085322   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:10.098736   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:10.108163   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:10.108183   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:10.108224   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:19:10.117330   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:10.117382   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:10.127090   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:19:10.135574   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:10.135648   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:10.146127   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.154474   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:10.154533   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.163245   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:19:10.171315   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:10.171375   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:10.181088   75006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:10.225495   75006 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:10.225571   75006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:10.327332   75006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:10.327442   75006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:10.327586   75006 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:10.335739   75006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:10.337610   75006 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:10.337730   75006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:10.337818   75006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:10.337935   75006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:10.338054   75006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:10.338174   75006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:10.338254   75006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:10.338359   75006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:10.338452   75006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:10.338562   75006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:10.338668   75006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:10.338718   75006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:10.338796   75006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:10.437447   75006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:10.868191   75006 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:10.961497   75006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:11.363158   75006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:11.963929   75006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:11.964410   75006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:11.967675   75006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.239250   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:09.250270   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:09.267205   74828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:09.267346   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.267366   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-864476 minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=no-preload-864476 minikube.k8s.io/primary=true
	I0816 18:19:09.282111   74828 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:09.471160   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.971453   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.471576   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.971748   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.471954   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.971371   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.471626   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.972021   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.472254   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.588350   74828 kubeadm.go:1113] duration metric: took 4.321062687s to wait for elevateKubeSystemPrivileges
	I0816 18:19:13.588392   74828 kubeadm.go:394] duration metric: took 5m0.245036951s to StartCluster
	I0816 18:19:13.588413   74828 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.588500   74828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:13.591118   74828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.591418   74828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:13.591683   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:13.591744   74828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:13.591809   74828 addons.go:69] Setting storage-provisioner=true in profile "no-preload-864476"
	I0816 18:19:13.591839   74828 addons.go:234] Setting addon storage-provisioner=true in "no-preload-864476"
	W0816 18:19:13.591851   74828 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:19:13.591882   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592025   74828 addons.go:69] Setting default-storageclass=true in profile "no-preload-864476"
	I0816 18:19:13.592070   74828 addons.go:69] Setting metrics-server=true in profile "no-preload-864476"
	I0816 18:19:13.592135   74828 addons.go:234] Setting addon metrics-server=true in "no-preload-864476"
	W0816 18:19:13.592150   74828 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:13.592073   74828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864476"
	I0816 18:19:13.592272   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592206   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592326   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592654   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592677   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592731   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592753   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592790   74828 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:13.594236   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:13.613019   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0816 18:19:13.613061   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0816 18:19:13.613087   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0816 18:19:13.613498   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613552   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613708   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.614094   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614113   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614198   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614222   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614403   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614420   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614478   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614675   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614728   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614856   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.615039   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615068   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.615401   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615442   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.619787   74828 addons.go:234] Setting addon default-storageclass=true in "no-preload-864476"
	W0816 18:19:13.619815   74828 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:13.619848   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.620274   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.620438   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.642013   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0816 18:19:13.642196   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0816 18:19:13.642654   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643201   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.643227   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.643304   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643888   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644065   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.644086   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.644537   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644548   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.644591   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.645002   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.646881   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0816 18:19:13.647127   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.647406   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.648126   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.648156   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.648725   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.648935   74828 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:13.649121   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.649823   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:13.649840   74828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:13.649861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.651524   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.652917   74828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:10.441027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:12.939870   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:13.653916   74828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.653933   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:13.653952   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.654035   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654463   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.654482   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654665   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.654883   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.655044   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.655247   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.657315   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657699   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.657783   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657974   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.658125   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.658247   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.658362   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.670111   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0816 18:19:13.670711   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.671220   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.671239   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.671585   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.671778   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.673274   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.673480   74828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:13.673493   74828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:13.673511   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.677160   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677542   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.677564   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677854   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.678049   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.678170   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.678263   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:11.970291   75006 out.go:235]   - Booting up control plane ...
	I0816 18:19:11.970385   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:11.970516   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:11.970617   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:11.988374   75006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:11.997980   75006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:11.998045   75006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:12.132297   75006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:12.132447   75006 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:13.135489   75006 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003222114s
	I0816 18:19:13.135584   75006 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:13.840111   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:13.903130   74828 node_ready.go:35] waiting up to 6m0s for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915130   74828 node_ready.go:49] node "no-preload-864476" has status "Ready":"True"
	I0816 18:19:13.915163   74828 node_ready.go:38] duration metric: took 12.001127ms for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915174   74828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:13.926756   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:13.944598   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.971002   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:13.971036   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:13.998897   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:14.015731   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:14.015754   74828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:14.080186   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:14.080212   74828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:14.187279   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:15.075984   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077053329s)
	I0816 18:19:15.076058   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076071   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076364   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131733705s)
	I0816 18:19:15.076478   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076495   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076405   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076567   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076591   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076600   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076436   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.076786   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076838   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076859   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076879   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076969   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076987   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.077443   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.077517   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.077535   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.164872   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.164903   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.165218   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.165238   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373294   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1859614s)
	I0816 18:19:15.373399   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373417   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.373716   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.373769   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.373804   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373825   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373837   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.374124   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.374130   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.374181   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.374192   74828 addons.go:475] Verifying addon metrics-server=true in "no-preload-864476"
	I0816 18:19:15.375801   74828 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:17.638005   75006 kubeadm.go:310] [api-check] The API server is healthy after 4.502130995s
	I0816 18:19:17.658334   75006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:17.678882   75006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:17.709612   75006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:17.709881   75006 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-256678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:17.724755   75006 kubeadm.go:310] [bootstrap-token] Using token: cdypho.k0vxtmnp4c93945s
	I0816 18:19:14.941895   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.440923   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:15.377611   74828 addons.go:510] duration metric: took 1.785861834s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:15.934515   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:18.435321   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.726222   75006 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:17.726361   75006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:17.733325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:17.740707   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:17.747325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:17.751554   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:17.761084   75006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:18.044607   75006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:18.485134   75006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:19.044481   75006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:19.045968   75006 kubeadm.go:310] 
	I0816 18:19:19.046038   75006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:19.046069   75006 kubeadm.go:310] 
	I0816 18:19:19.046185   75006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:19.046198   75006 kubeadm.go:310] 
	I0816 18:19:19.046229   75006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:19.046298   75006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:19.046343   75006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:19.046349   75006 kubeadm.go:310] 
	I0816 18:19:19.046396   75006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:19.046413   75006 kubeadm.go:310] 
	I0816 18:19:19.046504   75006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:19.046529   75006 kubeadm.go:310] 
	I0816 18:19:19.046614   75006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:19.046718   75006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:19.046813   75006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:19.046828   75006 kubeadm.go:310] 
	I0816 18:19:19.046941   75006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:19.047047   75006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:19.047056   75006 kubeadm.go:310] 
	I0816 18:19:19.047153   75006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047304   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:19.047346   75006 kubeadm.go:310] 	--control-plane 
	I0816 18:19:19.047358   75006 kubeadm.go:310] 
	I0816 18:19:19.047470   75006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:19.047480   75006 kubeadm.go:310] 
	I0816 18:19:19.047596   75006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047740   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:19.048871   75006 kubeadm.go:310] W0816 18:19:10.202021    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049167   75006 kubeadm.go:310] W0816 18:19:10.202700    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049279   75006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:19.049304   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:19:19.049318   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:19.051543   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:19.052677   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:19.063536   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:19.084460   75006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:19.084540   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.084608   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-256678 minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=default-k8s-diff-port-256678 minikube.k8s.io/primary=true
	I0816 18:19:19.257760   75006 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:19.258124   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.759000   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.940737   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:22.440273   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.934243   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:23.433046   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.258798   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:20.759112   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.258598   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.758433   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.258181   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.758276   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.258184   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.758168   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.846653   75006 kubeadm.go:1113] duration metric: took 4.762173901s to wait for elevateKubeSystemPrivileges
	I0816 18:19:23.846688   75006 kubeadm.go:394] duration metric: took 5m1.846731834s to StartCluster
	I0816 18:19:23.846708   75006 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.846784   75006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:23.848375   75006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.848662   75006 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:23.848750   75006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:23.848814   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:23.848840   75006 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848858   75006 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848866   75006 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848878   75006 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-256678"
	I0816 18:19:23.848882   75006 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.848887   75006 addons.go:243] addon storage-provisioner should already be in state true
	W0816 18:19:23.848890   75006 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:23.848915   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848918   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848914   75006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-256678"
	I0816 18:19:23.849232   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849259   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849271   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849293   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849362   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849404   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.850478   75006 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:23.852034   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:23.865786   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0816 18:19:23.865939   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0816 18:19:23.866248   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866304   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866398   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0816 18:19:23.866816   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866845   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866860   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.866863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866935   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867328   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867333   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867430   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.867447   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.867742   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867871   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.867897   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.868227   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.868247   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.870993   75006 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.871020   75006 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:23.871051   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.871403   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.871433   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.885139   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0816 18:19:23.885814   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.886386   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.886403   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.886814   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.886856   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0816 18:19:23.887024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.887202   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.887542   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0816 18:19:23.887784   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.887797   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.887863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.888165   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.888372   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.888389   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.889026   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.889254   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.889268   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.889518   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.889758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.890483   75006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:23.891262   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.891838   75006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:23.891859   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:23.891877   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.892581   75006 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:23.893621   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:23.893684   75006 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:23.893882   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.894413   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.894973   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.894994   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.895161   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.895322   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.895578   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.895757   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.897167   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897666   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.897685   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897802   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.897972   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.898132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.898248   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.906377   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
	I0816 18:19:23.906708   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.907497   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.907513   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.907932   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.908240   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.909917   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.910141   75006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:23.910159   75006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:23.910177   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.912435   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912678   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.912710   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912858   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.912982   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.913066   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.913138   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:24.062487   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:24.083148   75006 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092886   75006 node_ready.go:49] node "default-k8s-diff-port-256678" has status "Ready":"True"
	I0816 18:19:24.092907   75006 node_ready.go:38] duration metric: took 9.72996ms for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092916   75006 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.099123   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.184211   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:24.197461   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:24.197491   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:24.219263   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:24.258463   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:24.258498   75006 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:24.355822   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.355902   75006 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:24.436401   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.866038   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866125   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866058   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866163   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866517   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866526   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866536   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866546   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866600   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866626   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866636   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866649   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866676   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866778   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866793   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866810   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866888   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866923   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866932   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886041   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.886065   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.886338   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.886359   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886384   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.225367   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225397   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225704   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.225720   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.225730   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225739   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225961   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.226005   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.226025   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.226043   75006 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-256678"
	I0816 18:19:25.227605   75006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:23.934167   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.934191   74828 pod_ready.go:82] duration metric: took 10.007408518s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.934200   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940226   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.940249   74828 pod_ready.go:82] duration metric: took 6.040513ms for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940260   74828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945330   74828 pod_ready.go:93] pod "etcd-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.945351   74828 pod_ready.go:82] duration metric: took 5.082362ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945361   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949772   74828 pod_ready.go:93] pod "kube-apiserver-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.949800   74828 pod_ready.go:82] duration metric: took 4.429575ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949810   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954308   74828 pod_ready.go:93] pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.954328   74828 pod_ready.go:82] duration metric: took 4.510361ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954338   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331265   74828 pod_ready.go:93] pod "kube-proxy-6g6zx" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.331306   74828 pod_ready.go:82] duration metric: took 376.9609ms for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331320   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730715   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.730740   74828 pod_ready.go:82] duration metric: took 399.412376ms for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730748   74828 pod_ready.go:39] duration metric: took 10.815561534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.730761   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:24.730820   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:24.746674   74828 api_server.go:72] duration metric: took 11.155216371s to wait for apiserver process to appear ...
	I0816 18:19:24.746697   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:24.746714   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:19:24.750801   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:19:24.751835   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:24.751864   74828 api_server.go:131] duration metric: took 5.159229ms to wait for apiserver health ...
	I0816 18:19:24.751872   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:24.935471   74828 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:24.935510   74828 system_pods.go:61] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:24.935520   74828 system_pods.go:61] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:24.935539   74828 system_pods.go:61] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:24.935548   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:24.935555   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:24.935562   74828 system_pods.go:61] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:24.935572   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:24.935584   74828 system_pods.go:61] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:24.935596   74828 system_pods.go:61] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:24.935607   74828 system_pods.go:74] duration metric: took 183.727841ms to wait for pod list to return data ...
	I0816 18:19:24.935621   74828 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:25.132713   74828 default_sa.go:45] found service account: "default"
	I0816 18:19:25.132740   74828 default_sa.go:55] duration metric: took 197.112152ms for default service account to be created ...
	I0816 18:19:25.132750   74828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:25.335012   74828 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:25.335043   74828 system_pods.go:89] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:25.335048   74828 system_pods.go:89] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:25.335052   74828 system_pods.go:89] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:25.335057   74828 system_pods.go:89] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:25.335061   74828 system_pods.go:89] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:25.335064   74828 system_pods.go:89] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:25.335068   74828 system_pods.go:89] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:25.335075   74828 system_pods.go:89] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:25.335081   74828 system_pods.go:89] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:25.335089   74828 system_pods.go:126] duration metric: took 202.33381ms to wait for k8s-apps to be running ...
	I0816 18:19:25.335098   74828 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:25.335141   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:25.349420   74828 system_svc.go:56] duration metric: took 14.310938ms WaitForService to wait for kubelet
	I0816 18:19:25.349457   74828 kubeadm.go:582] duration metric: took 11.758002576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:25.349480   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:25.532145   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:25.532175   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:25.532189   74828 node_conditions.go:105] duration metric: took 182.702662ms to run NodePressure ...
	I0816 18:19:25.532200   74828 start.go:241] waiting for startup goroutines ...
	I0816 18:19:25.532209   74828 start.go:246] waiting for cluster config update ...
	I0816 18:19:25.532222   74828 start.go:255] writing updated cluster config ...
	I0816 18:19:25.532529   74828 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:25.588070   74828 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:25.589615   74828 out.go:177] * Done! kubectl is now configured to use "no-preload-864476" cluster and "default" namespace by default
	I0816 18:19:24.440489   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:25.441683   74510 pod_ready.go:82] duration metric: took 4m0.007816418s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:19:25.441706   74510 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 18:19:25.441714   74510 pod_ready.go:39] duration metric: took 4m6.551547163s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:25.441726   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:25.441753   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:25.441805   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:25.492207   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.492235   74510 cri.go:89] found id: ""
	I0816 18:19:25.492245   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:25.492313   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.497307   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:25.497388   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:25.537185   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.537211   74510 cri.go:89] found id: ""
	I0816 18:19:25.537220   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:25.537422   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.546564   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:25.546644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:25.602794   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.602817   74510 cri.go:89] found id: ""
	I0816 18:19:25.602827   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:25.602879   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.609018   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:25.609097   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:25.657942   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:25.657970   74510 cri.go:89] found id: ""
	I0816 18:19:25.657980   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:25.658044   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.663485   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:25.663551   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:25.709526   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:25.709554   74510 cri.go:89] found id: ""
	I0816 18:19:25.709564   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:25.709612   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.715845   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:25.715898   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:25.766505   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:25.766522   74510 cri.go:89] found id: ""
	I0816 18:19:25.766529   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:25.766573   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.771051   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:25.771127   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:25.810669   74510 cri.go:89] found id: ""
	I0816 18:19:25.810699   74510 logs.go:276] 0 containers: []
	W0816 18:19:25.810711   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:25.810720   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:25.810779   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:25.851412   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.851432   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:25.851438   74510 cri.go:89] found id: ""
	I0816 18:19:25.851454   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:25.851507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.856154   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.860812   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:25.860837   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.910929   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:25.910957   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.951932   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:25.951959   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.999861   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:25.999894   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:26.036535   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:26.036559   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:26.089637   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:26.089675   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:26.157679   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:26.157714   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:26.171402   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:26.171432   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:26.209537   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:26.209564   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:26.252702   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:26.252732   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:26.303169   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:26.303203   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:26.784058   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:26.784090   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:26.904095   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:26.904137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
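	Each "Gathering logs" pass above follows the same two-step pattern: resolve container IDs with crictl, then tail each container's log. A condensed sketch of that loop, reusing the exact flags from the Run: lines (illustrative only, not minikube's own code):

		# Per-component log gathering, mirroring the crictl invocations logged above.
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
		  for id in $(sudo crictl ps -a --quiet --name="$name"); do
		    echo "== $name $id =="
		    sudo /usr/bin/crictl logs --tail 400 "$id"
		  done
		done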
	I0816 18:19:25.228674   75006 addons.go:510] duration metric: took 1.37992722s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:26.105147   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:28.107202   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.607933   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:19:32.608136   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:32.608430   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:29.459100   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:29.476158   74510 api_server.go:72] duration metric: took 4m17.827179017s to wait for apiserver process to appear ...
	I0816 18:19:29.476183   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:29.476222   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:29.476279   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:29.509739   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:29.509767   74510 cri.go:89] found id: ""
	I0816 18:19:29.509776   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:29.509836   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.516078   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:29.516150   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:29.553766   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:29.553795   74510 cri.go:89] found id: ""
	I0816 18:19:29.553805   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:29.553857   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.558145   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:29.558210   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:29.599559   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:29.599583   74510 cri.go:89] found id: ""
	I0816 18:19:29.599594   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:29.599651   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.604108   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:29.604187   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:29.641990   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:29.642009   74510 cri.go:89] found id: ""
	I0816 18:19:29.642016   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:29.642062   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.645990   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:29.646047   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:29.679480   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:29.679505   74510 cri.go:89] found id: ""
	I0816 18:19:29.679514   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:29.679571   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.683361   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:29.683425   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:29.733167   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:29.733197   74510 cri.go:89] found id: ""
	I0816 18:19:29.733208   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:29.733266   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.737449   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:29.737518   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:29.771597   74510 cri.go:89] found id: ""
	I0816 18:19:29.771628   74510 logs.go:276] 0 containers: []
	W0816 18:19:29.771639   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:29.771647   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:29.771714   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:29.812346   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:29.812375   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:29.812381   74510 cri.go:89] found id: ""
	I0816 18:19:29.812390   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:29.812447   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.817909   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.821575   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:29.821602   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:30.288789   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:30.288836   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:30.332874   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:30.332904   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:30.347128   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:30.347168   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:30.456809   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:30.456845   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:30.505332   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:30.505362   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:30.540765   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:30.540798   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.576047   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:30.576077   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:30.611956   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:30.611992   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:30.678135   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:30.678177   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:30.732409   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:30.732437   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:30.773306   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:30.773331   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:30.827732   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:30.827763   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.367134   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:19:33.371523   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:19:33.372537   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:33.372560   74510 api_server.go:131] duration metric: took 3.896368169s to wait for apiserver health ...
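	The healthz probe here is a plain HTTPS GET against the apiserver endpoint shown in the log; it can be reproduced with curl or with the cluster's bundled kubectl (binary and kubeconfig paths as they appear in the describe-nodes command above):

		# Expects the literal body "ok" on success.
		curl -k https://192.168.61.218:8443/healthz
		sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz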
	I0816 18:19:33.372568   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:33.372589   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:33.372653   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:33.409551   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:33.409579   74510 cri.go:89] found id: ""
	I0816 18:19:33.409590   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:33.409648   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.413727   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:33.413802   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:33.457246   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:33.457268   74510 cri.go:89] found id: ""
	I0816 18:19:33.457277   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:33.457337   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.461490   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:33.461556   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:33.497141   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:33.497169   74510 cri.go:89] found id: ""
	I0816 18:19:33.497180   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:33.497241   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.501353   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:33.501421   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:33.537797   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:33.537816   74510 cri.go:89] found id: ""
	I0816 18:19:33.537823   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:33.537877   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.541727   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:33.541784   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:33.575882   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:33.575905   74510 cri.go:89] found id: ""
	I0816 18:19:33.575913   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:33.575964   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.579592   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:33.579644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:33.614425   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.614447   74510 cri.go:89] found id: ""
	I0816 18:19:33.614455   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:33.614507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.618130   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:33.618178   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:33.652369   74510 cri.go:89] found id: ""
	I0816 18:19:33.652393   74510 logs.go:276] 0 containers: []
	W0816 18:19:33.652403   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:33.652410   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:33.652463   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:33.687276   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.687295   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:33.687301   74510 cri.go:89] found id: ""
	I0816 18:19:33.687309   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:33.687361   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.691100   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.695148   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:33.695179   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.110901   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.606195   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:34.110732   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.110764   75006 pod_ready.go:82] duration metric: took 10.011612904s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.110778   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116373   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.116392   75006 pod_ready.go:82] duration metric: took 5.607377ms for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116401   75006 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124005   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.124027   75006 pod_ready.go:82] duration metric: took 7.618878ms for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124039   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129603   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.129623   75006 pod_ready.go:82] duration metric: took 5.575452ms for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129633   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145449   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.145474   75006 pod_ready.go:82] duration metric: took 15.831669ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506455   75006 pod_ready.go:93] pod "kube-proxy-qsskg" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.506477   75006 pod_ready.go:82] duration metric: took 360.982998ms for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905345   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.905365   75006 pod_ready.go:82] duration metric: took 398.872303ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905373   75006 pod_ready.go:39] duration metric: took 10.812448791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
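	The per-pod "Ready" waits above can be approximated with kubectl's wait verb against the same profile; a short example using two of the label selectors listed in the summary line (the 6m timeout is an arbitrary choice):

		kubectl --context default-k8s-diff-port-256678 -n kube-system wait pod --for=condition=Ready -l k8s-app=kube-dns --timeout=6m
		kubectl --context default-k8s-diff-port-256678 -n kube-system wait pod --for=condition=Ready -l component=kube-apiserver --timeout=6m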
	I0816 18:19:34.905386   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:34.905430   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:34.920554   75006 api_server.go:72] duration metric: took 11.071846456s to wait for apiserver process to appear ...
	I0816 18:19:34.920574   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:34.920589   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:19:34.927194   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:19:34.928420   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:34.928437   75006 api_server.go:131] duration metric: took 7.857168ms to wait for apiserver health ...
	I0816 18:19:34.928443   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:35.107220   75006 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:35.107248   75006 system_pods.go:61] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.107254   75006 system_pods.go:61] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.107258   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.107262   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.107267   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.107270   75006 system_pods.go:61] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.107274   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.107280   75006 system_pods.go:61] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.107288   75006 system_pods.go:61] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.107296   75006 system_pods.go:74] duration metric: took 178.847431ms to wait for pod list to return data ...
	I0816 18:19:35.107302   75006 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:35.303619   75006 default_sa.go:45] found service account: "default"
	I0816 18:19:35.303646   75006 default_sa.go:55] duration metric: took 196.337687ms for default service account to be created ...
	I0816 18:19:35.303655   75006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:35.508401   75006 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:35.508442   75006 system_pods.go:89] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.508452   75006 system_pods.go:89] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.508460   75006 system_pods.go:89] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.508466   75006 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.508471   75006 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.508477   75006 system_pods.go:89] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.508483   75006 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.508494   75006 system_pods.go:89] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.508504   75006 system_pods.go:89] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.508521   75006 system_pods.go:126] duration metric: took 204.859728ms to wait for k8s-apps to be running ...
	I0816 18:19:35.508544   75006 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:35.508605   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:35.523660   75006 system_svc.go:56] duration metric: took 15.109288ms WaitForService to wait for kubelet
	I0816 18:19:35.523687   75006 kubeadm.go:582] duration metric: took 11.674985717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:35.523704   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:35.704770   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:35.704797   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:35.704808   75006 node_conditions.go:105] duration metric: took 181.099433ms to run NodePressure ...
	I0816 18:19:35.704818   75006 start.go:241] waiting for startup goroutines ...
	I0816 18:19:35.704824   75006 start.go:246] waiting for cluster config update ...
	I0816 18:19:35.704834   75006 start.go:255] writing updated cluster config ...
	I0816 18:19:35.705096   75006 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:35.753637   75006 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:35.755747   75006 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-256678" cluster and "default" namespace by default
	I0816 18:19:33.732856   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:33.732881   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.796167   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:33.796215   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.835842   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:33.835869   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:33.956412   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:33.956450   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:34.004102   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:34.004137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:34.050504   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:34.050548   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:34.087815   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:34.087850   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:34.124096   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:34.124127   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:34.193377   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:34.193410   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:34.206480   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:34.206505   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:34.240262   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:34.240305   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:34.591979   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:34.592014   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:37.142552   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:19:37.142580   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.142585   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.142590   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.142593   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.142596   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.142600   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.142605   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.142609   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.142616   74510 system_pods.go:74] duration metric: took 3.770043434s to wait for pod list to return data ...
	I0816 18:19:37.142625   74510 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:37.145135   74510 default_sa.go:45] found service account: "default"
	I0816 18:19:37.145161   74510 default_sa.go:55] duration metric: took 2.530779ms for default service account to be created ...
	I0816 18:19:37.145169   74510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:37.149397   74510 system_pods.go:86] 8 kube-system pods found
	I0816 18:19:37.149423   74510 system_pods.go:89] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.149431   74510 system_pods.go:89] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.149437   74510 system_pods.go:89] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.149443   74510 system_pods.go:89] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.149451   74510 system_pods.go:89] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.149458   74510 system_pods.go:89] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.149471   74510 system_pods.go:89] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.149480   74510 system_pods.go:89] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.149491   74510 system_pods.go:126] duration metric: took 4.31556ms to wait for k8s-apps to be running ...
	I0816 18:19:37.149502   74510 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:37.149564   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:37.166663   74510 system_svc.go:56] duration metric: took 17.15398ms WaitForService to wait for kubelet
	I0816 18:19:37.166692   74510 kubeadm.go:582] duration metric: took 4m25.517719342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:37.166711   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:37.170081   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:37.170102   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:37.170112   74510 node_conditions.go:105] duration metric: took 3.396116ms to run NodePressure ...
	I0816 18:19:37.170122   74510 start.go:241] waiting for startup goroutines ...
	I0816 18:19:37.170129   74510 start.go:246] waiting for cluster config update ...
	I0816 18:19:37.170138   74510 start.go:255] writing updated cluster config ...
	I0816 18:19:37.170406   74510 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:37.218383   74510 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:37.220397   74510 out.go:177] * Done! kubectl is now configured to use "embed-certs-777541" cluster and "default" namespace by default
	I0816 18:19:37.609143   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:37.609401   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:47.609941   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:47.610185   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:07.611108   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:07.611350   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613446   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:47.613708   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613742   75402 kubeadm.go:310] 
	I0816 18:20:47.613809   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:20:47.613902   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:20:47.613926   75402 kubeadm.go:310] 
	I0816 18:20:47.613976   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:20:47.614028   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:20:47.614160   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:20:47.614174   75402 kubeadm.go:310] 
	I0816 18:20:47.614323   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:20:47.614383   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:20:47.614432   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:20:47.614441   75402 kubeadm.go:310] 
	I0816 18:20:47.614601   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:20:47.614730   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:20:47.614751   75402 kubeadm.go:310] 
	I0816 18:20:47.614875   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:20:47.614982   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:20:47.615101   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:20:47.615217   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:20:47.615230   75402 kubeadm.go:310] 
	I0816 18:20:47.616865   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:20:47.616971   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:20:47.617028   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 18:20:47.617173   75402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
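	The repeated kubelet-check failures above mean kubeadm never got an answer from the kubelet's local health endpoint on port 10248. A minimal triage sequence on the node, combining the probe kubeadm performs with the commands its own error text suggests:

		# Probe the endpoint kubeadm is polling.
		curl -sSL http://localhost:10248/healthz
		# Inspect the kubelet service and its recent journal.
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet --no-pager | tail -n 100
		# Check whether a control-plane container started and then crashed under CRI-O.
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause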
	
	I0816 18:20:47.617226   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:20:48.158066   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:20:48.172568   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:20:48.182445   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:20:48.182468   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:20:48.182527   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:20:48.191779   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:20:48.191847   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:20:48.201531   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:20:48.210495   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:20:48.210568   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:20:48.219701   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.228170   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:20:48.228242   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.237366   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:20:48.246335   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:20:48.246393   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
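	The cleanup above repeats one pattern per kubeconfig file: grep for the expected control-plane endpoint and delete the file when the check fails (here all four greps fail because the files do not exist). A compact sketch of that loop, illustrative rather than minikube's actual code:

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
		    sudo rm -f "/etc/kubernetes/$f"
		  fi
		done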
	I0816 18:20:48.255655   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:20:48.321873   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:20:48.321930   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:20:48.462199   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:20:48.462324   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:20:48.462448   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:20:48.646565   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:20:48.648485   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:20:48.648605   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:20:48.648748   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:20:48.648895   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:20:48.648994   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:20:48.649088   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:20:48.649185   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:20:48.649282   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:20:48.649368   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:20:48.649485   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:20:48.649595   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:20:48.649649   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:20:48.649753   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:20:48.864525   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:20:49.035729   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:20:49.086765   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:20:49.222612   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:20:49.239121   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:20:49.240158   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:20:49.240200   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:20:49.366027   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:20:49.367770   75402 out.go:235]   - Booting up control plane ...
	I0816 18:20:49.367907   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:20:49.373047   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:20:49.373886   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:20:49.374691   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:20:49.379220   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:21:29.381362   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:21:29.381473   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:29.381700   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:34.381889   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:34.382065   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:44.382765   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:44.382964   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:04.383485   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:04.383748   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382265   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:44.382558   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382572   75402 kubeadm.go:310] 
	I0816 18:22:44.382628   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:22:44.382715   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:22:44.382741   75402 kubeadm.go:310] 
	I0816 18:22:44.382789   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:22:44.382837   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:22:44.382986   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:22:44.382997   75402 kubeadm.go:310] 
	I0816 18:22:44.383149   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:22:44.383202   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:22:44.383246   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:22:44.383258   75402 kubeadm.go:310] 
	I0816 18:22:44.383421   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:22:44.383534   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:22:44.383549   75402 kubeadm.go:310] 
	I0816 18:22:44.383743   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:22:44.383877   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:22:44.383993   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:22:44.384092   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:22:44.384103   75402 kubeadm.go:310] 
	I0816 18:22:44.384783   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:22:44.384895   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:22:44.384986   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:22:44.385062   75402 kubeadm.go:394] duration metric: took 8m1.372176417s to StartCluster
	I0816 18:22:44.385108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:22:44.385173   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:22:44.425862   75402 cri.go:89] found id: ""
	I0816 18:22:44.425892   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.425901   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:22:44.425909   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:22:44.425982   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:22:44.461988   75402 cri.go:89] found id: ""
	I0816 18:22:44.462019   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.462030   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:22:44.462038   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:22:44.462109   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:22:44.496063   75402 cri.go:89] found id: ""
	I0816 18:22:44.496095   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.496106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:22:44.496114   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:22:44.496175   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:22:44.529875   75402 cri.go:89] found id: ""
	I0816 18:22:44.529899   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.529906   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:22:44.529912   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:22:44.529958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:22:44.565745   75402 cri.go:89] found id: ""
	I0816 18:22:44.565781   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.565791   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:22:44.565798   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:22:44.565860   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:22:44.604122   75402 cri.go:89] found id: ""
	I0816 18:22:44.604149   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.604160   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:22:44.604168   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:22:44.604228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:22:44.636607   75402 cri.go:89] found id: ""
	I0816 18:22:44.636658   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.636669   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:22:44.636677   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:22:44.636736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:22:44.670942   75402 cri.go:89] found id: ""
	I0816 18:22:44.670973   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.670981   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:22:44.670989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:22:44.671001   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:22:44.722403   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:22:44.722433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:22:44.738587   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:22:44.738627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:22:44.854530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:22:44.854563   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:22:44.854579   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:22:44.957308   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:22:44.957342   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 18:22:44.997652   75402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:22:44.997714   75402 out.go:270] * 
	W0816 18:22:44.997804   75402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:44.997828   75402 out.go:270] * 
	W0816 18:22:44.998787   75402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:22:45.002189   75402 out.go:201] 
	W0816 18:22:45.003254   75402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:45.003310   75402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:22:45.003340   75402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:22:45.004826   75402 out.go:201] 
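For reference, the remediation hinted at in the failure output above reduces to the following commands (an illustrative sketch only: the kubelet/crictl checks and the --extra-config flag are quoted verbatim from the log above; whether the cgroup-driver override actually resolves this particular kubelet failure is an assumption, not a verified result):

    # Run these on the minikube node (e.g. via 'minikube ssh')
    # Check kubelet health, as kubeadm suggests
    systemctl status kubelet
    journalctl -xeu kubelet
    # List any control-plane containers CRI-O may have started
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Retry the start with the cgroup driver override minikube suggests
    minikube start --extra-config=kubelet.cgroup-driver=systemd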
	
	
	==> CRI-O <==
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.465829812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832919465792079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7a80070-6d8d-46a0-8730-86fb1111b068 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.466666468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cfa4462-aca8-4641-b519-ad30e6a060dd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.466766517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cfa4462-aca8-4641-b519-ad30e6a060dd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.467056926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cfa4462-aca8-4641-b519-ad30e6a060dd name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.524071249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=650345a2-2a8e-4ac1-a9d5-81d9430d4b73 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.524597773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=650345a2-2a8e-4ac1-a9d5-81d9430d4b73 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.527734449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=527a083d-e4ab-4055-990c-baa71e253466 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.528255956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832919528224194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=527a083d-e4ab-4055-990c-baa71e253466 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.529230383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=308e1f87-9f72-487f-8c5c-bf5d7b9c26d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.529297060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=308e1f87-9f72-487f-8c5c-bf5d7b9c26d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.529542158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=308e1f87-9f72-487f-8c5c-bf5d7b9c26d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.578329712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bca366f1-8d7a-4516-a919-364fd879813c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.578434450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bca366f1-8d7a-4516-a919-364fd879813c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.580001186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa05780b-f12a-4b8b-876c-fdaa18613f0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.580786586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832919580752167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa05780b-f12a-4b8b-876c-fdaa18613f0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.581469767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8450b410-af54-437f-8df8-668d51db99ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.581585861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8450b410-af54-437f-8df8-668d51db99ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.581989626Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8450b410-af54-437f-8df8-668d51db99ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.628447435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb922091-887e-44b9-a982-2636fd928d0c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.628542969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb922091-887e-44b9-a982-2636fd928d0c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.629638603Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5549107b-9893-4f9d-81fd-118657a2fae4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.630773687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832919630732990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5549107b-9893-4f9d-81fd-118657a2fae4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.631555900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19c27e30-2f92-467f-bffc-edf5ee4d2da7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.631612633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19c27e30-2f92-467f-bffc-edf5ee4d2da7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:28:39 embed-certs-777541 crio[736]: time="2024-08-16 18:28:39.631885562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19c27e30-2f92-467f-bffc-edf5ee4d2da7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08db52c38328f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   663df6db7136a       storage-provisioner
	eb524cb685d6b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   2022c533c0df1       busybox
	3918f8eb004ee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   e25c7a557e9b5       coredns-6f6b679f8f-8njs2
	92401f8df7e94       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   d2f7a4d8ee312       kube-proxy-j5rl7
	81f4d0a570266       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   663df6db7136a       storage-provisioner
	72d29c313c76c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   3376ece6a713c       kube-controller-manager-embed-certs-777541
	fd0d63ff38eb4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   84d93b538a258       etcd-embed-certs-777541
	99d68f23b3bc9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   b9f249361f3f5       kube-scheduler-embed-certs-777541
	8c78984b6e3a7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   420ec3a2ffdea       kube-apiserver-embed-certs-777541
	
	
	==> coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34987 - 15764 "HINFO IN 2476056286808905898.6093248778645637882. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017274135s
	
	
	==> describe nodes <==
	Name:               embed-certs-777541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-777541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=embed-certs-777541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T18_05_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 18:05:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-777541
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:28:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:25:51 +0000   Fri, 16 Aug 2024 18:05:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:25:51 +0000   Fri, 16 Aug 2024 18:05:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:25:51 +0000   Fri, 16 Aug 2024 18:05:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:25:51 +0000   Fri, 16 Aug 2024 18:15:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.218
	  Hostname:    embed-certs-777541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2459fc1d07041d0b6f59364f1497951
	  System UUID:                b2459fc1-d070-41d0-b6f5-9364f1497951
	  Boot ID:                    ece19e17-996b-42c3-b7d3-9e5df75bd9fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-8njs2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-embed-certs-777541                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-embed-certs-777541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-embed-certs-777541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-j5rl7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-embed-certs-777541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-6867b74b74-6hkzb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m                kubelet          Node embed-certs-777541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node embed-certs-777541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node embed-certs-777541 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node embed-certs-777541 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node embed-certs-777541 event: Registered Node embed-certs-777541 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-777541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-777541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-777541 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-777541 event: Registered Node embed-certs-777541 in Controller
	
	
	==> dmesg <==
	[Aug16 18:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054069] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042273] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.035837] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.950396] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.408864] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.741474] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.054029] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054801] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.182881] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.118336] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.252108] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +3.935901] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[Aug16 18:15] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.061674] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.526971] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.396369] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[  +3.318516] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.186739] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] <==
	{"level":"info","ts":"2024-08-16T18:15:05.375578Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.218:2380"}
	{"level":"info","ts":"2024-08-16T18:15:05.375809Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"73f9a34abd6fe987","initial-advertise-peer-urls":["https://192.168.61.218:2380"],"listen-peer-urls":["https://192.168.61.218:2380"],"advertise-client-urls":["https://192.168.61.218:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.218:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T18:15:05.377591Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2cb457bdfb3a296b","local-member-id":"73f9a34abd6fe987","added-peer-id":"73f9a34abd6fe987","added-peer-peer-urls":["https://192.168.61.218:2380"]}
	{"level":"info","ts":"2024-08-16T18:15:05.378359Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2cb457bdfb3a296b","local-member-id":"73f9a34abd6fe987","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:15:05.378414Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:15:05.378052Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T18:15:07.206046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T18:15:07.206100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T18:15:07.206160Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 received MsgPreVoteResp from 73f9a34abd6fe987 at term 2"}
	{"level":"info","ts":"2024-08-16T18:15:07.206177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.206183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 received MsgVoteResp from 73f9a34abd6fe987 at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.206192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.206199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 73f9a34abd6fe987 elected leader 73f9a34abd6fe987 at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.214901Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"73f9a34abd6fe987","local-member-attributes":"{Name:embed-certs-777541 ClientURLs:[https://192.168.61.218:2379]}","request-path":"/0/members/73f9a34abd6fe987/attributes","cluster-id":"2cb457bdfb3a296b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T18:15:07.215281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:15:07.215538Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T18:15:07.215599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T18:15:07.215748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:15:07.216783Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:15:07.217181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:15:07.217676Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.218:2379"}
	{"level":"info","ts":"2024-08-16T18:15:07.218513Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T18:25:07.245313Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-08-16T18:25:07.255343Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"9.666948ms","hash":1634259651,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2732032,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-16T18:25:07.255422Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1634259651,"revision":859,"compact-revision":-1}
	
	
	==> kernel <==
	 18:28:40 up 14 min,  0 users,  load average: 0.09, 0.10, 0.08
	Linux embed-certs-777541 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] <==
	W0816 18:25:09.470828       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:25:09.470890       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 18:25:09.472024       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:25:09.472097       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:26:09.472704       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:26:09.472790       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:26:09.472645       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:26:09.472816       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:26:09.473948       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:26:09.474019       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:28:09.474314       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:28:09.474408       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 18:28:09.474548       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:28:09.474607       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 18:28:09.475564       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:28:09.475649       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] <==
	E0816 18:23:12.126982       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:23:12.569671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:23:42.133959       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:23:42.577090       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:24:12.140222       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:24:12.584406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:24:42.146660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:24:42.593962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:25:12.153306       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:25:12.600894       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:25:42.159729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:25:42.608978       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:25:51.405542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-777541"
	I0816 18:26:00.996548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="258.336µs"
	E0816 18:26:12.166044       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:26:12.618391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:26:13.993042       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="135.836µs"
	E0816 18:26:42.172559       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:26:42.625715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:27:12.178767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:27:12.632957       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:27:42.184923       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:27:42.639948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:28:12.191008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:28:12.647492       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 18:15:09.705563       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 18:15:09.715958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.218"]
	E0816 18:15:09.716033       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 18:15:09.743816       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 18:15:09.743851       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 18:15:09.743880       1 server_linux.go:169] "Using iptables Proxier"
	I0816 18:15:09.746001       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 18:15:09.746265       1 server.go:483] "Version info" version="v1.31.0"
	I0816 18:15:09.746287       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:15:09.751445       1 config.go:197] "Starting service config controller"
	I0816 18:15:09.751514       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 18:15:09.751555       1 config.go:104] "Starting endpoint slice config controller"
	I0816 18:15:09.751577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 18:15:09.752561       1 config.go:326] "Starting node config controller"
	I0816 18:15:09.752849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 18:15:09.851695       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 18:15:09.851797       1 shared_informer.go:320] Caches are synced for service config
	I0816 18:15:09.853218       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] <==
	I0816 18:15:06.361493       1 serving.go:386] Generated self-signed cert in-memory
	W0816 18:15:08.432686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 18:15:08.432809       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 18:15:08.432839       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 18:15:08.432901       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 18:15:08.479820       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 18:15:08.479859       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:15:08.482198       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 18:15:08.482264       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 18:15:08.482219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 18:15:08.482312       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 18:15:08.582917       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 18:27:29 embed-certs-777541 kubelet[944]: E0816 18:27:29.977671     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:27:33 embed-certs-777541 kubelet[944]: E0816 18:27:33.177870     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832853177554346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:33 embed-certs-777541 kubelet[944]: E0816 18:27:33.178230     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832853177554346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:42 embed-certs-777541 kubelet[944]: E0816 18:27:42.979326     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:27:43 embed-certs-777541 kubelet[944]: E0816 18:27:43.179964     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832863179689437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:43 embed-certs-777541 kubelet[944]: E0816 18:27:43.180000     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832863179689437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:53 embed-certs-777541 kubelet[944]: E0816 18:27:53.182396     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832873181895493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:53 embed-certs-777541 kubelet[944]: E0816 18:27:53.182779     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832873181895493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:27:54 embed-certs-777541 kubelet[944]: E0816 18:27:54.978569     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:28:02 embed-certs-777541 kubelet[944]: E0816 18:28:02.998550     944 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 18:28:02 embed-certs-777541 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 18:28:02 embed-certs-777541 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 18:28:02 embed-certs-777541 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 18:28:02 embed-certs-777541 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 18:28:03 embed-certs-777541 kubelet[944]: E0816 18:28:03.185026     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832883184575727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:03 embed-certs-777541 kubelet[944]: E0816 18:28:03.185090     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832883184575727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:06 embed-certs-777541 kubelet[944]: E0816 18:28:06.978736     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:28:13 embed-certs-777541 kubelet[944]: E0816 18:28:13.187395     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832893186969403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:13 embed-certs-777541 kubelet[944]: E0816 18:28:13.187837     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832893186969403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:17 embed-certs-777541 kubelet[944]: E0816 18:28:17.977558     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:28:23 embed-certs-777541 kubelet[944]: E0816 18:28:23.189174     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832903188831017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:23 embed-certs-777541 kubelet[944]: E0816 18:28:23.189225     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832903188831017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:31 embed-certs-777541 kubelet[944]: E0816 18:28:31.977469     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:28:33 embed-certs-777541 kubelet[944]: E0816 18:28:33.190995     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832913190654062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:28:33 embed-certs-777541 kubelet[944]: E0816 18:28:33.191313     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723832913190654062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] <==
	I0816 18:15:40.254869       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 18:15:40.262831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 18:15:40.262908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 18:15:57.665960       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 18:15:57.666602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81595c3c-39f4-4f4e-a45f-e2659ab69722", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-777541_31124775-bfcb-43b2-b7cc-5d32dd9342a4 became leader
	I0816 18:15:57.668218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-777541_31124775-bfcb-43b2-b7cc-5d32dd9342a4!
	I0816 18:15:57.768771       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-777541_31124775-bfcb-43b2-b7cc-5d32dd9342a4!
	
	
	==> storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] <==
	I0816 18:15:09.530878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 18:15:39.534335       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-777541 -n embed-certs-777541
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-777541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-6hkzb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-777541 describe pod metrics-server-6867b74b74-6hkzb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-777541 describe pod metrics-server-6867b74b74-6hkzb: exit status 1 (66.142242ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-6hkzb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-777541 describe pod metrics-server-6867b74b74-6hkzb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.62s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:22:48.775232   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:23:21.061823   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:23:29.899833   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:23:43.679887   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:24:11.231879   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:24:11.838575   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:24:15.344171   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:24:32.866200   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:24:52.964081   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:25:06.744176   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:25:14.631597   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
(previous warning repeated 18 more times)
E0816 18:25:34.295322   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
(previous warning repeated 21 more times)
E0816 18:25:55.931695   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
(previous warning repeated 15 more times)
E0816 18:26:12.269414   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
(previous warning repeated 10 more times)
E0816 18:26:23.210550   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
(previous warning repeated 14 more times)
E0816 18:26:37.696202   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
(previous warning repeated 70 more times)
E0816 18:27:48.775454   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[identical warning repeated 31 more times]
E0816 18:28:21.061445   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[identical warning repeated 22 more times]
E0816 18:28:43.679448   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[identical warning repeated 26 more times]
E0816 18:29:11.231909   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[identical warning repeated 21 more times]
E0816 18:29:32.866399   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[identical warning repeated 41 more times]
E0816 18:30:14.631164   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[identical warning repeated 23 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:31:12.269506   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:31:23.210224   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:31:24.135020   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (219.584886ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-783465" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
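A minimal manual reproduction of the check the helper was polling above, assuming the kubeconfig context matches the minikube profile name (minikube's default); the profile name and apiserver address are taken from this log, and these are standard minikube/kubectl/curl invocations rather than part of the test suite:

	out/minikube-linux-amd64 status -p old-k8s-version-783465
	kubectl --context old-k8s-version-783465 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	curl -k https://192.168.39.211:8443/healthz

With the apiserver in the state shown above, the kubectl and curl calls would be expected to fail with the same "connection refused" error.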
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (218.598641ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-783465 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-783465 logs -n 25: (1.626875447s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo cat                      | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:10:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:10:53.101401   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101412   75402 out.go:358] Setting ErrFile to fd 2...
	I0816 18:10:53.101418   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101600   75402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:10:53.102131   75402 out.go:352] Setting JSON to false
	I0816 18:10:53.103018   75402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1723825102,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:10:53.103076   75402 start.go:139] virtualization: kvm guest
	I0816 18:10:53.105216   75402 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:10:53.106496   75402 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:10:53.106504   75402 notify.go:220] Checking for updates...
	I0816 18:10:53.109235   75402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:10:53.110572   75402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:10:53.111747   75402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:10:53.113164   75402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:10:53.114589   75402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:10:53.116284   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:10:53.116746   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.116806   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.132445   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0816 18:10:53.132886   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.133456   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.133494   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.133836   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.134015   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.135791   75402 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:10:53.136942   75402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:10:53.137229   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.137260   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.151853   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0816 18:10:53.152327   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.152881   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.152905   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.153159   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.153307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.188002   75402 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:10:53.189287   75402 start.go:297] selected driver: kvm2
	I0816 18:10:53.189309   75402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.189432   75402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:10:53.190098   75402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.190187   75402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:10:53.205024   75402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:10:53.205386   75402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:10:53.205417   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:10:53.205425   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:10:53.205458   75402 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.205557   75402 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.207241   75402 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:10:53.208254   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:10:53.208286   75402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:10:53.208298   75402 cache.go:56] Caching tarball of preloaded images
	I0816 18:10:53.208386   75402 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:10:53.208400   75402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:10:53.208510   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:10:53.208736   75402 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:10:54.604889   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:10:57.676891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:03.756940   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:06.828911   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:12.908885   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:15.980925   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:22.060891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:25.132961   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:31.212919   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:34.284876   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:40.365032   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:43.436910   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:49.516914   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:52.588969   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:58.668915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:01.740965   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:07.820898   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:10.892922   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:16.972913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:20.044913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:26.124921   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:29.196968   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:35.276952   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:38.348971   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:44.428932   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:47.500897   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:53.580923   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:56.652927   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:02.732992   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:05.804929   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:11.884953   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:14.956943   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:21.036963   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:24.108915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:30.188851   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:33.260936   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:36.264963   74828 start.go:364] duration metric: took 4m2.37855556s to acquireMachinesLock for "no-preload-864476"
	I0816 18:13:36.265020   74828 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:36.265027   74828 fix.go:54] fixHost starting: 
	I0816 18:13:36.265379   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:36.265409   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:36.280707   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0816 18:13:36.281167   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:36.281747   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:13:36.281778   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:36.282122   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:36.282330   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:36.282457   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:13:36.284064   74828 fix.go:112] recreateIfNeeded on no-preload-864476: state=Stopped err=<nil>
	I0816 18:13:36.284084   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	W0816 18:13:36.284217   74828 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:36.286749   74828 out.go:177] * Restarting existing kvm2 VM for "no-preload-864476" ...
	I0816 18:13:36.262619   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:36.262654   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.262944   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:13:36.262967   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.263222   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:13:36.264803   74510 machine.go:96] duration metric: took 4m37.429582668s to provisionDockerMachine
	I0816 18:13:36.264858   74510 fix.go:56] duration metric: took 4m37.449862851s for fixHost
	I0816 18:13:36.264867   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 4m37.449881856s
	W0816 18:13:36.264895   74510 start.go:714] error starting host: provision: host is not running
	W0816 18:13:36.264994   74510 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 18:13:36.265005   74510 start.go:729] Will try again in 5 seconds ...
	I0816 18:13:36.288329   74828 main.go:141] libmachine: (no-preload-864476) Calling .Start
	I0816 18:13:36.288484   74828 main.go:141] libmachine: (no-preload-864476) Ensuring networks are active...
	I0816 18:13:36.289285   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network default is active
	I0816 18:13:36.289912   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network mk-no-preload-864476 is active
	I0816 18:13:36.290318   74828 main.go:141] libmachine: (no-preload-864476) Getting domain xml...
	I0816 18:13:36.291176   74828 main.go:141] libmachine: (no-preload-864476) Creating domain...
	I0816 18:13:37.504191   74828 main.go:141] libmachine: (no-preload-864476) Waiting to get IP...
	I0816 18:13:37.505110   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.505575   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.505621   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.505543   75973 retry.go:31] will retry after 308.411866ms: waiting for machine to come up
	I0816 18:13:37.816219   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.816877   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.816931   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.816852   75973 retry.go:31] will retry after 321.445064ms: waiting for machine to come up
	I0816 18:13:38.140594   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.141059   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.141082   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.141018   75973 retry.go:31] will retry after 337.935433ms: waiting for machine to come up
	I0816 18:13:38.480699   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.481110   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.481135   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.481033   75973 retry.go:31] will retry after 449.775503ms: waiting for machine to come up
	I0816 18:13:41.266589   74510 start.go:360] acquireMachinesLock for embed-certs-777541: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:13:38.932812   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.933232   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.933259   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.933171   75973 retry.go:31] will retry after 482.676832ms: waiting for machine to come up
	I0816 18:13:39.417939   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:39.418323   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:39.418350   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:39.418276   75973 retry.go:31] will retry after 740.37516ms: waiting for machine to come up
	I0816 18:13:40.160491   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:40.160917   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:40.160942   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:40.160867   75973 retry.go:31] will retry after 1.10464436s: waiting for machine to come up
	I0816 18:13:41.267213   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:41.267654   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:41.267680   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:41.267613   75973 retry.go:31] will retry after 1.395131164s: waiting for machine to come up
	I0816 18:13:42.664731   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:42.665229   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:42.665252   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:42.665181   75973 retry.go:31] will retry after 1.560403289s: waiting for machine to come up
	I0816 18:13:44.226847   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:44.227375   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:44.227404   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:44.227342   75973 retry.go:31] will retry after 1.647944685s: waiting for machine to come up
	I0816 18:13:45.876965   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:45.877411   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:45.877440   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:45.877366   75973 retry.go:31] will retry after 1.971325886s: waiting for machine to come up
	I0816 18:13:47.849950   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:47.850457   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:47.850490   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:47.850383   75973 retry.go:31] will retry after 2.95642392s: waiting for machine to come up
	I0816 18:13:50.810560   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:50.811013   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:50.811045   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:50.810930   75973 retry.go:31] will retry after 4.510008193s: waiting for machine to come up
	I0816 18:13:56.529339   75006 start.go:364] duration metric: took 4m6.515818295s to acquireMachinesLock for "default-k8s-diff-port-256678"
	I0816 18:13:56.529444   75006 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:56.529459   75006 fix.go:54] fixHost starting: 
	I0816 18:13:56.529851   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:56.529890   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:56.547077   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0816 18:13:56.547585   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:56.548068   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:13:56.548091   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:56.548421   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:56.548610   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:13:56.548766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:13:56.550373   75006 fix.go:112] recreateIfNeeded on default-k8s-diff-port-256678: state=Stopped err=<nil>
	I0816 18:13:56.550414   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	W0816 18:13:56.550604   75006 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:56.552781   75006 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-256678" ...
	I0816 18:13:55.326062   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.326558   74828 main.go:141] libmachine: (no-preload-864476) Found IP for machine: 192.168.50.50
	I0816 18:13:55.326576   74828 main.go:141] libmachine: (no-preload-864476) Reserving static IP address...
	I0816 18:13:55.326593   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has current primary IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.327109   74828 main.go:141] libmachine: (no-preload-864476) Reserved static IP address: 192.168.50.50
	I0816 18:13:55.327142   74828 main.go:141] libmachine: (no-preload-864476) Waiting for SSH to be available...
	I0816 18:13:55.327167   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.327191   74828 main.go:141] libmachine: (no-preload-864476) DBG | skip adding static IP to network mk-no-preload-864476 - found existing host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"}
	I0816 18:13:55.327205   74828 main.go:141] libmachine: (no-preload-864476) DBG | Getting to WaitForSSH function...
	I0816 18:13:55.329001   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329350   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.329378   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329534   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH client type: external
	I0816 18:13:55.329574   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa (-rw-------)
	I0816 18:13:55.329604   74828 main.go:141] libmachine: (no-preload-864476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:13:55.329622   74828 main.go:141] libmachine: (no-preload-864476) DBG | About to run SSH command:
	I0816 18:13:55.329636   74828 main.go:141] libmachine: (no-preload-864476) DBG | exit 0
	I0816 18:13:55.452553   74828 main.go:141] libmachine: (no-preload-864476) DBG | SSH cmd err, output: <nil>: 
	I0816 18:13:55.452964   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetConfigRaw
	I0816 18:13:55.453557   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.455951   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456334   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.456370   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456564   74828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/config.json ...
	I0816 18:13:55.456782   74828 machine.go:93] provisionDockerMachine start ...
	I0816 18:13:55.456801   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:55.456983   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.459149   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459547   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.459570   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459730   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.459918   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460068   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460207   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.460418   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.460603   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.460637   74828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:13:55.564875   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:13:55.564903   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565203   74828 buildroot.go:166] provisioning hostname "no-preload-864476"
	I0816 18:13:55.565229   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565455   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.568114   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568578   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.568612   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568777   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.568912   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569023   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569200   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.569448   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.569649   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.569667   74828 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-864476 && echo "no-preload-864476" | sudo tee /etc/hostname
	I0816 18:13:55.686349   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-864476
	
	I0816 18:13:55.686378   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.689171   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689572   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.689608   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689792   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.690008   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690183   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690418   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.690623   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.690782   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.690798   74828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864476/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:13:55.800352   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:55.800386   74828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:13:55.800436   74828 buildroot.go:174] setting up certificates
	I0816 18:13:55.800452   74828 provision.go:84] configureAuth start
	I0816 18:13:55.800470   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.800793   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.803388   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.803786   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.803822   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.804025   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.806567   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.806977   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.807003   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.807129   74828 provision.go:143] copyHostCerts
	I0816 18:13:55.807178   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:13:55.807198   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:13:55.807286   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:13:55.807401   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:13:55.807412   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:13:55.807439   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:13:55.807554   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:13:55.807565   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:13:55.807588   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:13:55.807648   74828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.no-preload-864476 san=[127.0.0.1 192.168.50.50 localhost minikube no-preload-864476]
	I0816 18:13:55.881474   74828 provision.go:177] copyRemoteCerts
	I0816 18:13:55.881529   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:13:55.881558   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.884424   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.884952   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.884983   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.885138   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.885335   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.885486   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.885669   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:55.966915   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:13:55.989812   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:13:56.011744   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:13:56.032745   74828 provision.go:87] duration metric: took 232.276991ms to configureAuth
	I0816 18:13:56.032778   74828 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:13:56.033001   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:13:56.033096   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.035919   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036283   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.036311   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036499   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.036713   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036975   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.037100   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.037275   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.037294   74828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:13:56.296112   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:13:56.296140   74828 machine.go:96] duration metric: took 839.343895ms to provisionDockerMachine
	I0816 18:13:56.296152   74828 start.go:293] postStartSetup for "no-preload-864476" (driver="kvm2")
	I0816 18:13:56.296162   74828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:13:56.296177   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.296537   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:13:56.296570   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.299838   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300364   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.300396   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300603   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.300833   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.300985   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.301187   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.383095   74828 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:13:56.387172   74828 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:13:56.387200   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:13:56.387286   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:13:56.387392   74828 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:13:56.387550   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:13:56.396072   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:13:56.419470   74828 start.go:296] duration metric: took 123.306644ms for postStartSetup
	I0816 18:13:56.419509   74828 fix.go:56] duration metric: took 20.154482872s for fixHost
	I0816 18:13:56.419529   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.422047   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422454   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.422503   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422573   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.422764   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.422963   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.423150   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.423388   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.423597   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.423610   74828 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:13:56.529164   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832036.506687395
	
	I0816 18:13:56.529190   74828 fix.go:216] guest clock: 1723832036.506687395
	I0816 18:13:56.529200   74828 fix.go:229] Guest: 2024-08-16 18:13:56.506687395 +0000 UTC Remote: 2024-08-16 18:13:56.419513163 +0000 UTC m=+262.671840210 (delta=87.174232ms)
	I0816 18:13:56.529229   74828 fix.go:200] guest clock delta is within tolerance: 87.174232ms
	I0816 18:13:56.529246   74828 start.go:83] releasing machines lock for "no-preload-864476", held for 20.264231324s
	I0816 18:13:56.529276   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.529645   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:56.532279   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532599   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.532660   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532824   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533348   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533522   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533604   74828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:13:56.533663   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.533759   74828 ssh_runner.go:195] Run: cat /version.json
	I0816 18:13:56.533786   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.536427   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536711   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536822   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.536845   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536996   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537071   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.537105   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.537191   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537334   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537430   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537497   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.537582   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537728   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537964   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.654319   74828 ssh_runner.go:195] Run: systemctl --version
	I0816 18:13:56.660640   74828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:13:56.806359   74828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:13:56.812415   74828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:13:56.812489   74828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:13:56.828095   74828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:13:56.828122   74828 start.go:495] detecting cgroup driver to use...
	I0816 18:13:56.828186   74828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:13:56.843041   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:13:56.856322   74828 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:13:56.856386   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:13:56.869899   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:13:56.884609   74828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:13:56.990986   74828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:13:57.134218   74828 docker.go:233] disabling docker service ...
	I0816 18:13:57.134283   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:13:57.156415   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:13:57.172969   74828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:13:57.328279   74828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:13:57.448217   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:13:57.461630   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:13:57.478199   74828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:13:57.478271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.487845   74828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:13:57.487918   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.497895   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.509260   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.519090   74828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:13:57.529351   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.539816   74828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.559271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.573027   74828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:13:57.583410   74828 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:13:57.583490   74828 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:13:57.598762   74828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:13:57.609589   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:13:57.727016   74828 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:13:57.876815   74828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:13:57.876876   74828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:13:57.882172   74828 start.go:563] Will wait 60s for crictl version
	I0816 18:13:57.882241   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:57.885706   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:13:57.926981   74828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:13:57.927070   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.957802   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.984920   74828 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:13:57.986450   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:57.989584   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990205   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:57.990257   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990552   74828 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 18:13:57.994584   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:13:58.007996   74828 kubeadm.go:883] updating cluster {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:13:58.008137   74828 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:13:58.008184   74828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:13:58.041643   74828 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:13:58.041672   74828 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:13:58.041751   74828 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.041778   74828 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.041794   74828 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.041741   74828 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.041779   74828 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.041899   74828 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 18:13:58.041918   74828 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.041798   74828 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.043388   74828 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.043394   74828 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.289223   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.299125   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.308703   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 18:13:58.339031   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.351467   74828 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 18:13:58.351514   74828 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.351572   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.358019   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.359198   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.385487   74828 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 18:13:58.385529   74828 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.385571   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.392417   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.506834   74828 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 18:13:58.506886   74828 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.506896   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.506924   74828 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 18:13:58.506963   74828 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.507003   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.506928   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507072   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.507004   74828 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 18:13:58.507099   74828 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.507124   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507160   74828 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 18:13:58.507181   74828 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.507228   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.562410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.562469   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.562481   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.562554   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.562590   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.562628   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.686069   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.690288   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.690352   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.692851   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.692911   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.693027   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.777263   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:56.554238   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Start
	I0816 18:13:56.554426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring networks are active...
	I0816 18:13:56.555221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network default is active
	I0816 18:13:56.555599   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network mk-default-k8s-diff-port-256678 is active
	I0816 18:13:56.556004   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Getting domain xml...
	I0816 18:13:56.556809   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Creating domain...
	I0816 18:13:57.825641   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting to get IP...
	I0816 18:13:57.826681   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827158   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:57.827129   76107 retry.go:31] will retry after 267.923612ms: waiting for machine to come up
	I0816 18:13:58.096794   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.097158   76107 retry.go:31] will retry after 286.726817ms: waiting for machine to come up
	I0816 18:13:58.386213   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386782   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.386704   76107 retry.go:31] will retry after 386.697374ms: waiting for machine to come up
	I0816 18:13:58.775483   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.775989   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.776014   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.775949   76107 retry.go:31] will retry after 554.398617ms: waiting for machine to come up
	I0816 18:13:59.331517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332002   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.331943   76107 retry.go:31] will retry after 589.24333ms: waiting for machine to come up
	I0816 18:13:58.823309   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 18:13:58.823318   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 18:13:58.823410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.823434   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.823437   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:13:58.823549   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.836312   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.894363   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 18:13:58.894428   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 18:13:58.894447   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:13:58.933183   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 18:13:58.933290   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:13:58.934389   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 18:13:58.934456   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 18:13:58.934491   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 18:13:58.934550   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:13:58.934569   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:13:58.934682   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792156   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.897633034s)
	I0816 18:14:00.792196   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 18:14:00.792224   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.89763588s)
	I0816 18:14:00.792257   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 18:14:00.792230   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792281   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.858968807s)
	I0816 18:14:00.792300   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 18:14:00.792317   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792355   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.85778817s)
	I0816 18:14:00.792370   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 18:14:00.792415   74828 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.857704749s)
	I0816 18:14:00.792422   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.857843473s)
	I0816 18:14:00.792436   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 18:14:00.792457   74828 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 18:14:00.792491   74828 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792528   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:14:00.797103   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:03.171070   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.378727123s)
	I0816 18:14:03.171118   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 18:14:03.171149   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.374004458s)
	I0816 18:14:03.171155   74828 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171274   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171225   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:59.922834   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923439   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.923368   76107 retry.go:31] will retry after 779.656786ms: waiting for machine to come up
	I0816 18:14:00.704929   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705395   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705417   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:00.705344   76107 retry.go:31] will retry after 790.87115ms: waiting for machine to come up
	I0816 18:14:01.497557   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.497999   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.498052   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:01.497981   76107 retry.go:31] will retry after 919.825072ms: waiting for machine to come up
	I0816 18:14:02.419821   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420280   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420312   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:02.420227   76107 retry.go:31] will retry after 1.304504009s: waiting for machine to come up
	I0816 18:14:03.725928   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726378   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726400   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:03.726344   76107 retry.go:31] will retry after 2.105251359s: waiting for machine to come up
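	The retries above are libmachine polling libvirt until the restarted guest obtains a DHCP lease on the mk-default-k8s-diff-port-256678 network. A hedged sketch of inspecting the same lease by hand with stock libvirt tooling (connection URI and names taken from this log; not a command the test itself runs):
	
	    virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-256678
	    virsh --connect qemu:///system domifaddr default-k8s-diff-port-256678 --source lease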
	I0816 18:14:06.879864   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.708558161s)
	I0816 18:14:06.879904   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 18:14:06.879905   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.708563338s)
	I0816 18:14:06.879935   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:06.879981   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:06.879991   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:08.769077   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.889063218s)
	I0816 18:14:08.769114   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 18:14:08.769145   74828 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769231   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769146   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.889146748s)
	I0816 18:14:08.769343   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 18:14:08.769431   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:05.833605   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834078   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834109   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:05.834025   76107 retry.go:31] will retry after 2.042421539s: waiting for machine to come up
	I0816 18:14:07.878000   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878510   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878541   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:07.878432   76107 retry.go:31] will retry after 2.777402825s: waiting for machine to come up
	I0816 18:14:10.627286   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.858028746s)
	I0816 18:14:10.627331   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 18:14:10.627346   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.857891086s)
	I0816 18:14:10.627358   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:10.627378   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 18:14:10.627402   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:11.977277   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.349851948s)
	I0816 18:14:11.977314   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 18:14:11.977339   74828 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:11.977389   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:12.630939   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 18:14:12.630999   74828 cache_images.go:123] Successfully loaded all cached images
	I0816 18:14:12.631004   74828 cache_images.go:92] duration metric: took 14.589319022s to LoadCachedImages
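	The block above is the no-preload image path: each required image is inspected with podman, a mismatched copy is removed with crictl rmi, the tarball is transferred from the host cache into /var/lib/minikube/images, and podman load imports it into the CRI-O store. A rough manual equivalent for one image, run inside the guest (paths follow the layout shown in this log and may differ per profile):
	
	    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.31.0 \
	      || sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	    sudo crictl images | grep kube-proxy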
	I0816 18:14:12.631016   74828 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.31.0 crio true true} ...
	I0816 18:14:12.631132   74828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-864476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:12.631207   74828 ssh_runner.go:195] Run: crio config
	I0816 18:14:12.683072   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:12.683094   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:12.683107   74828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:12.683129   74828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864476 NodeName:no-preload-864476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:12.683276   74828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864476"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:12.683345   74828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:12.693879   74828 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:12.693941   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:12.702601   74828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 18:14:12.718235   74828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:12.733455   74828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
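	The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and only promoted over kubeadm.yaml after the diff check later in this log. A sketch of sanity-checking such a file on the node, assuming the bundled kubeadm (v1.31.0 here) ships the "kubeadm config validate" subcommand:
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new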
	I0816 18:14:12.748878   74828 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:12.752276   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
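	The one-liner above rewrites /etc/hosts through a temp file so the control-plane.minikube.internal entry is replaced atomically. An approximate two-step equivalent, shown only as a sketch (not what minikube runs):
	
	    sudo sed -i '/\tcontrol-plane\.minikube\.internal$/d' /etc/hosts
	    printf '192.168.50.50\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts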
	I0816 18:14:12.763390   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:12.872450   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:12.888531   74828 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476 for IP: 192.168.50.50
	I0816 18:14:12.888569   74828 certs.go:194] generating shared ca certs ...
	I0816 18:14:12.888589   74828 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:12.888783   74828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:12.888845   74828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:12.888860   74828 certs.go:256] generating profile certs ...
	I0816 18:14:12.888971   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/client.key
	I0816 18:14:12.889070   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key.30cf6dcb
	I0816 18:14:12.889136   74828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key
	I0816 18:14:12.889298   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:12.889339   74828 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:12.889351   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:12.889391   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:12.889421   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:12.889452   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:12.889507   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:12.890441   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:12.919571   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:12.947375   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:12.975197   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:13.007308   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:14:13.056151   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:13.080317   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:13.102231   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:13.124045   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:13.145312   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:13.166806   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:13.188173   74828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:13.203594   74828 ssh_runner.go:195] Run: openssl version
	I0816 18:14:13.209148   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:13.220266   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224569   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224635   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.230141   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:13.241362   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:13.252437   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256658   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256712   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.262006   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:13.273168   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:13.284518   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288566   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288611   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.293944   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
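	The ln -fs calls above follow the OpenSSL hashed-directory convention: each CA under /etc/ssl/certs gets a symlink named after its subject hash plus a .0 suffix, which is what the preceding openssl x509 -hash calls print. Reproducing the minikubeCA link from this log by hand:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0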
	I0816 18:14:13.305148   74828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:13.309460   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:13.315123   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:13.320854   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:13.326676   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:13.332183   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:13.337794   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
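	The -checkend 86400 flag asks OpenSSL whether each certificate is still valid 86400 seconds (24 hours) from now; exit status 0 means it is, non-zero means it expires within that window, which is presumably what would push minikube to regenerate it. A standalone example with the same flag:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid in 24h" || echo "expires within 24h"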
	I0816 18:14:13.343369   74828 kubeadm.go:392] StartCluster: {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:13.343470   74828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:13.343527   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.384490   74828 cri.go:89] found id: ""
	I0816 18:14:13.384567   74828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:13.395094   74828 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:13.395116   74828 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:13.395183   74828 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:13.406605   74828 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:13.407898   74828 kubeconfig.go:125] found "no-preload-864476" server: "https://192.168.50.50:8443"
	I0816 18:14:13.410808   74828 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:13.420516   74828 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.50
	I0816 18:14:13.420541   74828 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:13.420554   74828 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:13.420589   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.459174   74828 cri.go:89] found id: ""
	I0816 18:14:13.459242   74828 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:13.475598   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:13.484685   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:13.484707   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:13.484756   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:13.493092   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:13.493147   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:13.501649   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:13.509987   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:13.510028   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:13.518500   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.526689   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:13.526737   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.535606   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:13.545130   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:13.545185   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:13.553947   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:13.562763   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:13.663383   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:10.657652   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658105   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:10.657999   76107 retry.go:31] will retry after 3.856225979s: waiting for machine to come up
	I0816 18:14:14.518358   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.518875   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Found IP for machine: 192.168.72.144
	I0816 18:14:14.518896   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserving static IP address...
	I0816 18:14:14.518915   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has current primary IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.519296   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserved static IP address: 192.168.72.144
	I0816 18:14:14.519334   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.519346   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for SSH to be available...
	I0816 18:14:14.519377   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | skip adding static IP to network mk-default-k8s-diff-port-256678 - found existing host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"}
	I0816 18:14:14.519391   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Getting to WaitForSSH function...
	I0816 18:14:14.521566   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.521926   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.521969   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.522133   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH client type: external
	I0816 18:14:14.522160   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa (-rw-------)
	I0816 18:14:14.522202   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:14.522221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | About to run SSH command:
	I0816 18:14:14.522235   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | exit 0
	I0816 18:14:14.648603   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:14.649005   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetConfigRaw
	I0816 18:14:14.649616   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:14.652340   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.652767   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.652796   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.653116   75006 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/config.json ...
	I0816 18:14:14.653337   75006 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:14.653361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:14.653598   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.656062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656412   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.656442   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656565   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.656757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.656895   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.657015   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.657128   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.657312   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.657321   75006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:14.768721   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:14.768749   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.768990   75006 buildroot.go:166] provisioning hostname "default-k8s-diff-port-256678"
	I0816 18:14:14.769021   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.769246   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.772310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772675   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.772704   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772922   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.773084   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773242   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.773564   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.773764   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.773783   75006 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-256678 && echo "default-k8s-diff-port-256678" | sudo tee /etc/hostname
	I0816 18:14:14.894016   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-256678
	
	I0816 18:14:14.894047   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.896797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897150   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.897184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897424   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.897613   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897933   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.898124   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.898286   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.898303   75006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-256678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-256678/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-256678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:15.814480   75402 start.go:364] duration metric: took 3m22.605706427s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:14:15.814546   75402 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:15.814554   75402 fix.go:54] fixHost starting: 
	I0816 18:14:15.815001   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:15.815062   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:15.834710   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0816 18:14:15.835124   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:15.835653   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:14:15.835676   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:15.836005   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:15.836258   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:15.836392   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:14:15.838010   75402 fix.go:112] recreateIfNeeded on old-k8s-version-783465: state=Stopped err=<nil>
	I0816 18:14:15.838043   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	W0816 18:14:15.838200   75402 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:15.840214   75402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	I0816 18:14:15.016150   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:15.016176   75006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:15.016200   75006 buildroot.go:174] setting up certificates
	I0816 18:14:15.016213   75006 provision.go:84] configureAuth start
	I0816 18:14:15.016231   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:15.016518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.019132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019687   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.019725   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019907   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.022758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023192   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.023233   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023408   75006 provision.go:143] copyHostCerts
	I0816 18:14:15.023468   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:15.023489   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:15.023552   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:15.023649   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:15.023659   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:15.023681   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:15.023733   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:15.023740   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:15.023756   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:15.023802   75006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-256678 san=[127.0.0.1 192.168.72.144 default-k8s-diff-port-256678 localhost minikube]
	I0816 18:14:15.142549   75006 provision.go:177] copyRemoteCerts
	I0816 18:14:15.142601   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:15.142625   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.145515   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.145867   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.145903   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.146029   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.146250   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.146436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.146604   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.230785   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:15.258450   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 18:14:15.286008   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:15.308690   75006 provision.go:87] duration metric: took 292.45797ms to configureAuth
	I0816 18:14:15.308725   75006 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:15.308927   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:15.308996   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.311959   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.312332   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312492   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.312713   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.312890   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.313028   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.313184   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.313369   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.313387   75006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:15.574487   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:15.574517   75006 machine.go:96] duration metric: took 921.166622ms to provisionDockerMachine
	I0816 18:14:15.574529   75006 start.go:293] postStartSetup for "default-k8s-diff-port-256678" (driver="kvm2")
	I0816 18:14:15.574538   75006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:15.574552   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.574835   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:15.574854   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.577944   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578266   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.578295   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578469   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.578651   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.578800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.578912   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.664404   75006 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:15.668362   75006 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:15.668389   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:15.668459   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:15.668562   75006 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:15.668705   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:15.678830   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:15.702087   75006 start.go:296] duration metric: took 127.545675ms for postStartSetup
	I0816 18:14:15.702129   75006 fix.go:56] duration metric: took 19.172678011s for fixHost
	I0816 18:14:15.702152   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.704680   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705117   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.705154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705288   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.705479   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705643   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.705922   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.706084   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.706095   75006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:15.814313   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832055.788948458
	
	I0816 18:14:15.814337   75006 fix.go:216] guest clock: 1723832055.788948458
	I0816 18:14:15.814348   75006 fix.go:229] Guest: 2024-08-16 18:14:15.788948458 +0000 UTC Remote: 2024-08-16 18:14:15.702133997 +0000 UTC m=+265.826862410 (delta=86.814461ms)
	I0816 18:14:15.814372   75006 fix.go:200] guest clock delta is within tolerance: 86.814461ms
	I0816 18:14:15.814382   75006 start.go:83] releasing machines lock for "default-k8s-diff-port-256678", held for 19.284958633s
	I0816 18:14:15.814416   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.814723   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.817995   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.818467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818620   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819299   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819616   75006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:15.819656   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.819840   75006 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:15.819869   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.822797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823189   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823521   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823659   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.823804   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823811   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.823828   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823965   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824064   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.824177   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.824234   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.824368   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824486   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.948709   75006 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:15.956239   75006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:16.103538   75006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:16.109299   75006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:16.109385   75006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:16.125056   75006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:16.125092   75006 start.go:495] detecting cgroup driver to use...
	I0816 18:14:16.125188   75006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:16.141741   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:16.158917   75006 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:16.158993   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:16.173256   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:16.187026   75006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:16.332452   75006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:16.503181   75006 docker.go:233] disabling docker service ...
	I0816 18:14:16.503254   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:16.517961   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:16.535991   75006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:16.667874   75006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:16.799300   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:16.813852   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:16.832891   75006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:16.832953   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.845621   75006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:16.845716   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.856045   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.866117   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.877586   75006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:16.887643   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.897164   75006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.915247   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.924887   75006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:16.933645   75006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:16.933709   75006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:16.946920   75006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:16.955928   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:17.090148   75006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:17.241434   75006 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:17.241531   75006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:17.246730   75006 start.go:563] Will wait 60s for crictl version
	I0816 18:14:17.246796   75006 ssh_runner.go:195] Run: which crictl
	I0816 18:14:17.250397   75006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:17.289194   75006 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:17.289295   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.324401   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.361220   75006 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:15.841411   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .Start
	I0816 18:14:15.841576   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:14:15.842263   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:14:15.842609   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:14:15.843023   75402 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:14:15.844141   75402 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:14:17.215163   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:14:17.216445   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.216933   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.217029   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.216922   76298 retry.go:31] will retry after 286.243503ms: waiting for machine to come up
	I0816 18:14:17.504645   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.505240   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.505262   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.505175   76298 retry.go:31] will retry after 275.715235ms: waiting for machine to come up
	I0816 18:14:17.782804   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.783365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.783392   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.783292   76298 retry.go:31] will retry after 343.088129ms: waiting for machine to come up
	I0816 18:14:14.936549   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273126441s)
	I0816 18:14:14.936584   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.139778   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.201814   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.270552   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:15.270667   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:15.771379   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.271296   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.335242   74828 api_server.go:72] duration metric: took 1.064710561s to wait for apiserver process to appear ...
	I0816 18:14:16.335265   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:16.335282   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:16.335727   74828 api_server.go:269] stopped: https://192.168.50.50:8443/healthz: Get "https://192.168.50.50:8443/healthz": dial tcp 192.168.50.50:8443: connect: connection refused
	I0816 18:14:16.835361   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:17.362436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:17.365728   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366122   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:17.366154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366403   75006 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:17.370322   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:17.383153   75006 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:17.383303   75006 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:17.383364   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:17.420269   75006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:17.420339   75006 ssh_runner.go:195] Run: which lz4
	I0816 18:14:17.424477   75006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:17.428507   75006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:17.428547   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:18.717202   75006 crio.go:462] duration metric: took 1.292754157s to copy over tarball
	I0816 18:14:18.717278   75006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:19.241691   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.241729   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.241746   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.292883   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.292924   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.336097   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.363715   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.363753   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:19.835848   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.840615   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.840666   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.336291   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.343751   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:20.343785   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.835470   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.841217   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:14:20.849609   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:20.849642   74828 api_server.go:131] duration metric: took 4.514370955s to wait for apiserver health ...
	I0816 18:14:20.849653   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:20.849662   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:20.851828   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:18.127538   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.128044   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.128077   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.127958   76298 retry.go:31] will retry after 543.91951ms: waiting for machine to come up
	I0816 18:14:18.673778   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.674328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.674351   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.674274   76298 retry.go:31] will retry after 694.978788ms: waiting for machine to come up
	I0816 18:14:19.370976   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.371577   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.371605   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.371538   76298 retry.go:31] will retry after 578.640883ms: waiting for machine to come up
	I0816 18:14:19.952328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.952917   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.952941   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.952863   76298 retry.go:31] will retry after 820.19233ms: waiting for machine to come up
	I0816 18:14:20.774767   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:20.775175   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:20.775200   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:20.775134   76298 retry.go:31] will retry after 1.262201815s: waiting for machine to come up
	I0816 18:14:22.038872   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:22.039357   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:22.039385   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:22.039302   76298 retry.go:31] will retry after 1.164593889s: waiting for machine to come up
	I0816 18:14:20.853121   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:20.866117   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:20.888451   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:20.902482   74828 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:20.902530   74828 system_pods.go:61] "coredns-6f6b679f8f-w9cbm" [9b50c913-f492-4432-a50a-e0f727a7b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:20.902545   74828 system_pods.go:61] "etcd-no-preload-864476" [e45a11b8-fa3e-4a6e-9d06-5d82fdaf20dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:20.902557   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [1cf82575-b520-4bc0-9e90-d40c02b4468d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:20.902568   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [8c9123e0-16a4-4940-8464-4bec383bac90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:20.902577   74828 system_pods.go:61] "kube-proxy-vdqxz" [0332e87e-5c0c-41f5-88a9-31b7f8494eb6] Running
	I0816 18:14:20.902587   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [6139753f-b5cf-4af5-a9fa-03fb220e3dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:20.902606   74828 system_pods.go:61] "metrics-server-6867b74b74-rxtwg" [f0d04fc9-24c0-47e3-afdc-f250ef07900c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:20.902620   74828 system_pods.go:61] "storage-provisioner" [65303dd8-27d7-4bf3-ae58-ff5fe556f17f] Running
	I0816 18:14:20.902631   74828 system_pods.go:74] duration metric: took 14.150825ms to wait for pod list to return data ...
	I0816 18:14:20.902645   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:20.909305   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:20.909342   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:20.909355   74828 node_conditions.go:105] duration metric: took 6.699359ms to run NodePressure ...
	I0816 18:14:20.909377   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:21.193348   74828 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198555   74828 kubeadm.go:739] kubelet initialised
	I0816 18:14:21.198585   74828 kubeadm.go:740] duration metric: took 5.20722ms waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198595   74828 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:21.204695   74828 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.212855   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212877   74828 pod_ready.go:82] duration metric: took 8.157781ms for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.212889   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212899   74828 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.220125   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220150   74828 pod_ready.go:82] duration metric: took 7.241861ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.220158   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220166   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.226930   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226957   74828 pod_ready.go:82] duration metric: took 6.783402ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.226967   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226976   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.292011   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292054   74828 pod_ready.go:82] duration metric: took 65.066708ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.292066   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292075   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692536   74828 pod_ready.go:93] pod "kube-proxy-vdqxz" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:21.692564   74828 pod_ready.go:82] duration metric: took 400.476293ms for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692577   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.155261   75006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437939279s)
	I0816 18:14:21.155296   75006 crio.go:469] duration metric: took 2.438065212s to extract the tarball
	I0816 18:14:21.155325   75006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:21.199451   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:21.249963   75006 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:14:21.249990   75006 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:14:21.250002   75006 kubeadm.go:934] updating node { 192.168.72.144 8444 v1.31.0 crio true true} ...
	I0816 18:14:21.250129   75006 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-256678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:21.250211   75006 ssh_runner.go:195] Run: crio config
	I0816 18:14:21.299619   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:21.299644   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:21.299663   75006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:21.299684   75006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-256678 NodeName:default-k8s-diff-port-256678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:21.299813   75006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-256678"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:21.299880   75006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:21.310127   75006 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:21.310205   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:21.319566   75006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 18:14:21.337043   75006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:21.352319   75006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 18:14:21.370117   75006 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:21.373986   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:21.386518   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:21.508855   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:21.525184   75006 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678 for IP: 192.168.72.144
	I0816 18:14:21.525209   75006 certs.go:194] generating shared ca certs ...
	I0816 18:14:21.525230   75006 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:21.525413   75006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:21.525468   75006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:21.525481   75006 certs.go:256] generating profile certs ...
	I0816 18:14:21.525604   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/client.key
	I0816 18:14:21.525688   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key.ac6d83aa
	I0816 18:14:21.525738   75006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key
	I0816 18:14:21.525888   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:21.525931   75006 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:21.525944   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:21.525991   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:21.526028   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:21.526052   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:21.526101   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:21.526719   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:21.556992   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:21.590311   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:21.624782   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:21.655118   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 18:14:21.695431   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:21.722575   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:21.744870   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:14:21.770850   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:21.793906   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:21.817643   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:21.839584   75006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:21.856447   75006 ssh_runner.go:195] Run: openssl version
	I0816 18:14:21.862104   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:21.872584   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876886   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876945   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.882424   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:21.892761   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:21.904506   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909624   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909687   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.915765   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:21.927160   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:21.937381   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941423   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941477   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.946741   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:21.958082   75006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:21.962431   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:21.969889   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:21.977302   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:21.983468   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:21.989115   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:21.994569   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:21.999962   75006 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:22.000090   75006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:22.000139   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.034063   75006 cri.go:89] found id: ""
	I0816 18:14:22.034158   75006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:22.043988   75006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:22.044003   75006 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:22.044040   75006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:22.053276   75006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:22.054255   75006 kubeconfig.go:125] found "default-k8s-diff-port-256678" server: "https://192.168.72.144:8444"
	I0816 18:14:22.056408   75006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:22.065394   75006 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I0816 18:14:22.065429   75006 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:22.065443   75006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:22.065496   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.112797   75006 cri.go:89] found id: ""
	I0816 18:14:22.112889   75006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:22.130231   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:22.139432   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:22.139451   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:22.139493   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:14:22.148118   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:22.148168   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:22.158088   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:14:22.166741   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:22.166803   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:22.175578   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.185238   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:22.185286   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.194074   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:14:22.205053   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:22.205105   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:22.216506   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:22.228754   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:22.344597   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.006750   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.275587   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.356515   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.432890   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:23.432991   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.933834   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.433736   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.205567   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:23.206051   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:23.206078   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:23.206007   76298 retry.go:31] will retry after 2.304886921s: waiting for machine to come up
	I0816 18:14:25.512748   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:25.513295   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:25.513321   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:25.513261   76298 retry.go:31] will retry after 2.603393394s: waiting for machine to come up
	I0816 18:14:23.801346   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:26.199045   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:28.205981   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:24.933846   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.954190   75006 api_server.go:72] duration metric: took 1.521307594s to wait for apiserver process to appear ...
	I0816 18:14:24.954219   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:24.954242   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.835517   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.835552   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.835567   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.842961   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.842992   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.954290   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.963372   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:27.963400   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.455035   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.460244   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.460279   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.954475   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.962766   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.962802   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.454298   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.458650   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.458681   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.954582   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.959359   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.959384   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.455077   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.461068   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.461099   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.954772   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.960557   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.960588   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:31.455232   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:31.460157   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:14:31.471015   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:31.471046   75006 api_server.go:131] duration metric: took 6.516819341s to wait for apiserver health ...
	I0816 18:14:31.471056   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:31.471064   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:31.472930   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:28.118105   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:28.118675   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:28.118706   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:28.118637   76298 retry.go:31] will retry after 2.400714985s: waiting for machine to come up
	I0816 18:14:30.521623   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:30.522157   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:30.522196   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:30.522111   76298 retry.go:31] will retry after 3.210603239s: waiting for machine to come up
	I0816 18:14:30.699930   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:33.200755   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:31.474388   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:31.484723   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:31.502094   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:31.511169   75006 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:31.511207   75006 system_pods.go:61] "coredns-6f6b679f8f-2sgmk" [3c98207c-ab70-435e-a725-3d6b108515d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:31.511215   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [c6d0dbe2-8b80-4fb2-8408-7b2e668cf4cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:31.511221   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [4506e38e-6685-41f8-98b1-738b35476ad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:31.511228   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [14282ea5-2ebc-4ea6-8e06-829e86296333] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:31.511232   75006 system_pods.go:61] "kube-proxy-l4lr2" [880ceec6-c3d1-4934-b02a-7a175ded8a02] Running
	I0816 18:14:31.511236   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [b122d1cd-12e8-4b87-a179-c50baf4c89d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:31.511241   75006 system_pods.go:61] "metrics-server-6867b74b74-fc4h4" [3cb9624e-98b4-4edb-a2de-d6a971520cac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:31.511244   75006 system_pods.go:61] "storage-provisioner" [79442d12-c28b-447e-ae96-e4c2ddb5c4da] Running
	I0816 18:14:31.511250   75006 system_pods.go:74] duration metric: took 9.137933ms to wait for pod list to return data ...
	I0816 18:14:31.511256   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:31.515339   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:31.515361   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:31.515370   75006 node_conditions.go:105] duration metric: took 4.110442ms to run NodePressure ...
	I0816 18:14:31.515387   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:31.774197   75006 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778258   75006 kubeadm.go:739] kubelet initialised
	I0816 18:14:31.778276   75006 kubeadm.go:740] duration metric: took 4.052927ms waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778283   75006 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:31.782595   75006 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:33.788205   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.053312   74510 start.go:364] duration metric: took 53.786665535s to acquireMachinesLock for "embed-certs-777541"
	I0816 18:14:35.053367   74510 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:35.053372   74510 fix.go:54] fixHost starting: 
	I0816 18:14:35.053687   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:35.053718   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:35.073509   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0816 18:14:35.073935   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:35.074396   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:14:35.074420   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:35.074749   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:35.074928   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:35.075102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:14:35.076710   74510 fix.go:112] recreateIfNeeded on embed-certs-777541: state=Stopped err=<nil>
	I0816 18:14:35.076738   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	W0816 18:14:35.076903   74510 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:35.078759   74510 out.go:177] * Restarting existing kvm2 VM for "embed-certs-777541" ...
	I0816 18:14:33.735394   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735898   75402 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:14:33.735925   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735933   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:14:33.736407   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.736439   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:14:33.736459   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | skip adding static IP to network mk-old-k8s-version-783465 - found existing host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"}
	I0816 18:14:33.736478   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:14:33.736492   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:14:33.739028   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739377   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.739397   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739596   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:14:33.739689   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:14:33.739724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:33.739747   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:14:33.739785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:14:33.861036   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:33.861405   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:14:33.862105   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:33.864850   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865245   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.865272   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865542   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:14:33.865796   75402 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:33.865820   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:33.866053   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.868422   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868761   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.868795   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868911   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.869095   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869267   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869415   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.869579   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.869796   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.869810   75402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:33.972880   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:33.972907   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973141   75402 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:14:33.973172   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973378   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.976198   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976530   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.976563   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976747   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.976945   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977086   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977228   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.977369   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.977529   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.977540   75402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:14:34.086092   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:14:34.086123   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.088785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089107   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.089132   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089285   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.089527   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089684   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089828   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.089997   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.090152   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.090168   75402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:34.200744   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:34.200779   75402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:34.200834   75402 buildroot.go:174] setting up certificates
	I0816 18:14:34.200848   75402 provision.go:84] configureAuth start
	I0816 18:14:34.200862   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:34.201175   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:34.203868   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204297   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.204344   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204506   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.207067   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207441   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.207464   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207810   75402 provision.go:143] copyHostCerts
	I0816 18:14:34.207869   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:34.207892   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:34.207951   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:34.208058   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:34.208069   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:34.208103   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:34.208180   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:34.208192   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:34.208220   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:34.208291   75402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
	I0816 18:14:34.413800   75402 provision.go:177] copyRemoteCerts
	I0816 18:14:34.413857   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:34.413881   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.416724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417138   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.417173   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417345   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.417673   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.417894   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.418089   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.495519   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:34.517414   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:14:34.540423   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:14:34.563983   75402 provision.go:87] duration metric: took 363.122639ms to configureAuth
	I0816 18:14:34.564019   75402 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:34.564229   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:14:34.564299   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.567149   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567550   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.567580   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567753   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.567935   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568098   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568255   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.568448   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.568659   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.568680   75402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:34.824064   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:34.824091   75402 machine.go:96] duration metric: took 958.278616ms to provisionDockerMachine
	I0816 18:14:34.824106   75402 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:14:34.824120   75402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:34.824169   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:34.824556   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:34.824599   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.827203   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827517   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.827547   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827677   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.827869   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.828033   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.828171   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.912148   75402 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:34.916652   75402 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:34.916681   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:34.916755   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:34.916864   75402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:34.916989   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:34.927061   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:34.949703   75402 start.go:296] duration metric: took 125.581331ms for postStartSetup
	I0816 18:14:34.949743   75402 fix.go:56] duration metric: took 19.13519024s for fixHost
	I0816 18:14:34.949763   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.952740   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953090   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.953124   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.953532   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953715   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953861   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.954029   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.954229   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.954242   75402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:35.053143   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832075.025252523
	
	I0816 18:14:35.053171   75402 fix.go:216] guest clock: 1723832075.025252523
	I0816 18:14:35.053180   75402 fix.go:229] Guest: 2024-08-16 18:14:35.025252523 +0000 UTC Remote: 2024-08-16 18:14:34.949747236 +0000 UTC m=+221.880938919 (delta=75.505287ms)
	I0816 18:14:35.053204   75402 fix.go:200] guest clock delta is within tolerance: 75.505287ms
	I0816 18:14:35.053211   75402 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 19.238692888s
	I0816 18:14:35.053243   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.053549   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:35.056365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.056792   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.056823   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.057009   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057509   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057731   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057831   75402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:35.057892   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.057951   75402 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:35.057972   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.060543   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060733   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060987   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061016   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061126   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061148   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061154   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061319   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061339   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061456   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061518   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061639   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061720   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.061773   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.174137   75402 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:35.181704   75402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:35.323490   75402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:35.330733   75402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:35.330807   75402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:35.350653   75402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:35.350679   75402 start.go:495] detecting cgroup driver to use...
	I0816 18:14:35.350763   75402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:35.372307   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:35.386513   75402 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:35.386598   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:35.400406   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:35.414761   75402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:35.540356   75402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:35.675726   75402 docker.go:233] disabling docker service ...
	I0816 18:14:35.675793   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:35.691169   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:35.707288   75402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:35.858149   75402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:35.981654   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:35.996396   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:36.013656   75402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:14:36.013711   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.023839   75402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:36.023907   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.033889   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.043727   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.053496   75402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:36.063694   75402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:36.072919   75402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:36.072979   75402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:36.085707   75402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:36.095377   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:36.219235   75402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:36.384915   75402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:36.384990   75402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:36.392122   75402 start.go:563] Will wait 60s for crictl version
	I0816 18:14:36.392196   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:36.397589   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:36.443581   75402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:36.443710   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.473740   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.512542   75402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:14:36.513678   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:36.517404   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.517912   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:36.517948   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.518190   75402 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:36.523577   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:36.536188   75402 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:36.536361   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:14:36.536425   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:36.587027   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:36.587085   75402 ssh_runner.go:195] Run: which lz4
	I0816 18:14:36.590780   75402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:36.594635   75402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:36.594673   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 18:14:35.080033   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Start
	I0816 18:14:35.080220   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring networks are active...
	I0816 18:14:35.080971   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network default is active
	I0816 18:14:35.081366   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network mk-embed-certs-777541 is active
	I0816 18:14:35.081887   74510 main.go:141] libmachine: (embed-certs-777541) Getting domain xml...
	I0816 18:14:35.082634   74510 main.go:141] libmachine: (embed-certs-777541) Creating domain...
	I0816 18:14:36.459300   74510 main.go:141] libmachine: (embed-certs-777541) Waiting to get IP...
	I0816 18:14:36.460282   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.460801   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.460883   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.460778   76422 retry.go:31] will retry after 291.491491ms: waiting for machine to come up
	I0816 18:14:36.754548   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.755372   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.755412   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.755313   76422 retry.go:31] will retry after 356.347467ms: waiting for machine to come up
	I0816 18:14:37.113124   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.113704   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.113739   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.113676   76422 retry.go:31] will retry after 386.244375ms: waiting for machine to come up
	I0816 18:14:37.502241   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.502796   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.502826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.502750   76422 retry.go:31] will retry after 437.69847ms: waiting for machine to come up
	I0816 18:14:37.942667   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.943423   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.943456   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.943378   76422 retry.go:31] will retry after 709.064032ms: waiting for machine to come up
	I0816 18:14:38.653840   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:38.654349   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:38.654386   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:38.654297   76422 retry.go:31] will retry after 594.417028ms: waiting for machine to come up
	I0816 18:14:34.700134   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:34.700158   74828 pod_ready.go:82] duration metric: took 13.007571631s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:34.700171   74828 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:36.707977   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:38.708527   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.790842   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:37.791236   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:37.791278   75006 pod_ready.go:82] duration metric: took 6.008656328s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:37.791294   75006 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798513   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:39.798543   75006 pod_ready.go:82] duration metric: took 2.007240233s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798557   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:38.127403   75402 crio.go:462] duration metric: took 1.536659915s to copy over tarball
	I0816 18:14:38.127504   75402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:41.109575   75402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982013621s)
	I0816 18:14:41.109639   75402 crio.go:469] duration metric: took 2.982198625s to extract the tarball
	I0816 18:14:41.109650   75402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:41.152940   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:41.185863   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:41.185892   75402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:14:41.185982   75402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.186003   75402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.186036   75402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.186044   75402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.186103   75402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:14:41.186171   75402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187521   75402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:14:41.187532   75402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187542   75402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.187527   75402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.187595   75402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.187605   75402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.187688   75402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.187840   75402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.421551   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:14:41.462506   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.467716   75402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:14:41.467758   75402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:14:41.467810   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508571   75402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:14:41.508638   75402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.508687   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508691   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.514560   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.520003   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.526475   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.526892   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.533271   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.569269   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.569426   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.694043   75402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:14:41.694100   75402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.694049   75402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:14:41.694210   75402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.694173   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.694268   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.701292   75402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:14:41.701337   75402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.701389   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.707345   75402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:14:41.707415   75402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.707467   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.711820   75402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:14:41.711854   75402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.711896   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.723813   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.723850   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.723814   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.723939   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.723951   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.724003   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.724060   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.872645   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.872674   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:14:41.873747   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.873786   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.873891   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.873899   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.873960   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.997519   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:14:42.002048   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:42.002091   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:42.002140   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:42.002178   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:42.002218   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:42.070993   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:42.115418   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:14:42.115527   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:14:42.115623   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:14:42.115631   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:14:42.115689   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:14:42.235706   75402 cache_images.go:92] duration metric: took 1.049784726s to LoadCachedImages
	W0816 18:14:42.235807   75402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
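Note: the X warning above means none of the v1.20.0 images were present in the on-disk cache under .minikube/cache/images, so the node falls back to pulling them during kubeadm init. As a hedged sketch (not part of this test run), the cache can be pre-seeded from the host with minikube's cache subcommand; the image names mirror the LoadCachedImages line above:
    # Sketch: pre-populate minikube's local image cache so LoadCachedImages can succeed on a later start.
    minikube cache add registry.k8s.io/pause:3.2
    minikube cache add registry.k8s.io/etcd:3.4.13-0
    minikube cache add registry.k8s.io/coredns:1.7.0
    minikube cache add registry.k8s.io/kube-apiserver:v1.20.0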
	I0816 18:14:42.235821   75402 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:14:42.235939   75402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
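Note: the [Unit]/[Service] block above is the kubelet systemd drop-in that minikube generates; the log further down shows it being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the unit to /lib/systemd/system/kubelet.service. A hedged sketch for inspecting the merged result on the node:
    # Sketch: show the kubelet unit together with its minikube-written drop-in, then reload systemd.
    sudo systemctl cat kubelet
    sudo systemctl daemon-reload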
	I0816 18:14:42.236024   75402 ssh_runner.go:195] Run: crio config
	I0816 18:14:42.286742   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:14:42.286763   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:42.286771   75402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:42.286789   75402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:14:42.286904   75402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:42.286961   75402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:14:42.297015   75402 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:42.297098   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:42.306400   75402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:14:42.322812   75402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:42.339791   75402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
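Note: the multi-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch, not something this run performs, the same file can be sanity-checked on the node without mutating anything, since kubeadm init accepts --dry-run:
    # Sketch: dry-run the generated config with the pinned v1.20.0 kubeadm binary.
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run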
	I0816 18:14:42.356930   75402 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:42.360578   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:42.373248   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:42.495499   75402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:42.511910   75402 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:14:42.511942   75402 certs.go:194] generating shared ca certs ...
	I0816 18:14:42.511964   75402 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:42.512147   75402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:42.512206   75402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:42.512220   75402 certs.go:256] generating profile certs ...
	I0816 18:14:42.512361   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:14:42.512431   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:14:42.512483   75402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:14:42.512664   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:42.512709   75402 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:42.512724   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:42.512754   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:42.512794   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:42.512825   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:42.512881   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:42.513660   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:42.552291   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:42.585617   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:42.611017   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:42.638092   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:14:42.676877   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:14:42.710091   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:42.743734   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:42.779905   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:42.802779   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:42.826432   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:42.849286   75402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:42.866901   75402 ssh_runner.go:195] Run: openssl version
	I0816 18:14:42.872283   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:42.882976   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887432   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887504   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.893275   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:42.903687   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:42.915232   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919669   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919735   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.925282   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:42.937888   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:42.949994   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954495   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954548   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.960295   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
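Note: the ln -fs commands above give each CA a hash-named alias (3ec20f2e.0, b5213941.0, 51391683.0) so OpenSSL's lookup-by-subject-hash works. A minimal sketch of how the 8-hex-digit names are derived (the b5213941 value matches the minikubeCA.pem hashing line above):
    # Sketch: derive the subject-hash symlink name OpenSSL expects in /etc/ssl/certs.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"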
	I0816 18:14:42.972006   75402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:42.976450   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:42.982741   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:42.988649   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:42.995021   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:43.000965   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:43.007030   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
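Note: the block of openssl ... -checkend 86400 runs above is the certificate-expiry check: -checkend N exits 0 only if the certificate is still valid N seconds from now, so 86400 asks "good for at least another 24 hours?". A hedged one-liner showing the same check by hand:
    # Sketch: -checkend succeeds only if the cert will not expire within the given window.
    sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expires within 24h"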
	I0816 18:14:43.012891   75402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:43.012983   75402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:43.013071   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.050670   75402 cri.go:89] found id: ""
	I0816 18:14:43.050741   75402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:43.060748   75402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:43.060773   75402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:43.060825   75402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:43.070299   75402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:43.071251   75402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:14:43.071945   75402 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-783465" cluster setting kubeconfig missing "old-k8s-version-783465" context setting]
	I0816 18:14:43.072870   75402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
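Note: the kubeconfig.go lines above report that the jenkins kubeconfig is missing both the cluster and the context entry for old-k8s-version-783465, so minikube rewrites the file under a lock. A hedged sketch of the equivalent manual repair with kubectl (server address and port taken from the node entry earlier in this log; the user name here is an assumption, not from the log):
    # Sketch: recreate the missing kubeconfig entries by hand (illustrative only).
    kubectl config set-cluster old-k8s-version-783465 \
      --server=https://192.168.39.211:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt
    kubectl config set-context old-k8s-version-783465 \
      --cluster=old-k8s-version-783465 --user=old-k8s-version-783465   # user name assumed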
	I0816 18:14:39.250064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:39.250979   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:39.251028   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:39.250914   76422 retry.go:31] will retry after 1.014851653s: waiting for machine to come up
	I0816 18:14:40.266811   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:40.267287   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:40.267323   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:40.267238   76422 retry.go:31] will retry after 1.333311972s: waiting for machine to come up
	I0816 18:14:41.602031   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:41.602532   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:41.602565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:41.602480   76422 retry.go:31] will retry after 1.525496469s: waiting for machine to come up
	I0816 18:14:43.130136   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:43.130620   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:43.130661   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:43.130563   76422 retry.go:31] will retry after 2.206344656s: waiting for machine to come up
	I0816 18:14:41.206879   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.706278   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:41.806382   75006 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.927145   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.927173   75006 pod_ready.go:82] duration metric: took 4.128607781s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.927182   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932293   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.932314   75006 pod_ready.go:82] duration metric: took 5.122737ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932326   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937128   75006 pod_ready.go:93] pod "kube-proxy-l4lr2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.937146   75006 pod_ready.go:82] duration metric: took 4.812798ms for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937154   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.941992   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.942018   75006 pod_ready.go:82] duration metric: took 4.856588ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.942030   75006 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
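Note: the pod_ready.go lines above poll each control-plane pod in kube-system until its Ready condition is True, with a 4-minute cap per pod. The same wait can be expressed directly with kubectl, as a hedged sketch:
    # Sketch: wait for a specific kube-system pod to report Ready, mirroring pod_ready.go's 4m timeout.
    kubectl --context default-k8s-diff-port-256678 -n kube-system \
      wait --for=condition=Ready pod/kube-scheduler-default-k8s-diff-port-256678 --timeout=4m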
	I0816 18:14:43.141753   75402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:43.154269   75402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0816 18:14:43.154324   75402 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:43.154341   75402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:43.154404   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.192966   75402 cri.go:89] found id: ""
	I0816 18:14:43.193035   75402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:43.213101   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:43.222811   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:43.222826   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:43.222870   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:43.232196   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:43.232261   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:43.241633   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:43.250751   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:43.250800   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:43.260197   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.268943   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:43.269000   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.277887   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:43.286281   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:43.286391   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:43.295899   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:43.306026   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:43.441487   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.213457   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.431649   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.553955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.646817   75402 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:44.646923   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.147202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.648050   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.147958   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.647398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.646992   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.338228   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:45.338729   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:45.338763   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:45.338660   76422 retry.go:31] will retry after 2.526891535s: waiting for machine to come up
	I0816 18:14:47.868326   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:47.868821   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:47.868853   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:47.868774   76422 retry.go:31] will retry after 2.866643935s: waiting for machine to come up
	I0816 18:14:45.706669   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:47.707062   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:45.948791   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.447930   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.147987   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.646974   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.147114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.647020   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.147765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.647135   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.147506   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.647568   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.647865   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.736760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:50.737295   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:50.737331   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:50.737245   76422 retry.go:31] will retry after 3.824271015s: waiting for machine to come up
	I0816 18:14:50.206249   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.206435   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:50.449586   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.948577   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
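Note: both clusters above are still waiting on their metrics-server pods, which keep reporting Ready:"False". A hedged sketch for inspecting such a pod; the k8s-app=metrics-server label selector is an assumption about how the addon labels its deployment, while the pod name comes from the log:
    # Sketch: inspect the unready metrics-server pod (label selector assumed).
    kubectl --context default-k8s-diff-port-256678 -n kube-system \
      get pods -l k8s-app=metrics-server -o wide
    kubectl --context default-k8s-diff-port-256678 -n kube-system \
      describe pod metrics-server-6867b74b74-fc4h4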
	I0816 18:14:54.566285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.566784   74510 main.go:141] libmachine: (embed-certs-777541) Found IP for machine: 192.168.61.218
	I0816 18:14:54.566809   74510 main.go:141] libmachine: (embed-certs-777541) Reserving static IP address...
	I0816 18:14:54.566825   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has current primary IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.567171   74510 main.go:141] libmachine: (embed-certs-777541) Reserved static IP address: 192.168.61.218
	I0816 18:14:54.567193   74510 main.go:141] libmachine: (embed-certs-777541) Waiting for SSH to be available...
	I0816 18:14:54.567211   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.567231   74510 main.go:141] libmachine: (embed-certs-777541) DBG | skip adding static IP to network mk-embed-certs-777541 - found existing host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"}
	I0816 18:14:54.567245   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Getting to WaitForSSH function...
	I0816 18:14:54.569546   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.569864   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.569890   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.570019   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH client type: external
	I0816 18:14:54.570046   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa (-rw-------)
	I0816 18:14:54.570073   74510 main.go:141] libmachine: (embed-certs-777541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:54.570082   74510 main.go:141] libmachine: (embed-certs-777541) DBG | About to run SSH command:
	I0816 18:14:54.570109   74510 main.go:141] libmachine: (embed-certs-777541) DBG | exit 0
	I0816 18:14:54.692450   74510 main.go:141] libmachine: (embed-certs-777541) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:54.692828   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetConfigRaw
	I0816 18:14:54.693486   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:54.696565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.696943   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.696987   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.697248   74510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/config.json ...
	I0816 18:14:54.697455   74510 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:54.697475   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:54.697686   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.700172   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700491   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.700520   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700716   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.700906   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701239   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.701440   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.701650   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.701662   74510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:54.800770   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:54.800805   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801047   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:14:54.801079   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801264   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.804313   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804734   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.804761   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804940   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.805132   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805485   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.805711   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.805869   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.805886   74510 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-777541 && echo "embed-certs-777541" | sudo tee /etc/hostname
	I0816 18:14:54.918908   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-777541
	
	I0816 18:14:54.918949   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.921760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922117   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.922146   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922338   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.922511   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922681   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922843   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.923033   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.923243   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.923261   74510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-777541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-777541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-777541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:55.028983   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:55.029016   74510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:55.029040   74510 buildroot.go:174] setting up certificates
	I0816 18:14:55.029051   74510 provision.go:84] configureAuth start
	I0816 18:14:55.029064   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:55.029320   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.032273   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032693   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.032743   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032983   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.035257   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035581   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.035606   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035742   74510 provision.go:143] copyHostCerts
	I0816 18:14:55.035797   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:55.035814   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:55.035899   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:55.035996   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:55.036004   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:55.036024   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:55.036081   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:55.036087   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:55.036106   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:55.036155   74510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.embed-certs-777541 san=[127.0.0.1 192.168.61.218 embed-certs-777541 localhost minikube]
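Note: the provision.go line above generates the machine's server certificate with the SAN list shown (127.0.0.1, the node IP 192.168.61.218, the embed-certs-777541/localhost/minikube hostnames). A hedged sketch for confirming the SANs actually baked into the resulting file:
    # Sketch: print the Subject Alternative Names of the generated server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'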
	I0816 18:14:55.182540   74510 provision.go:177] copyRemoteCerts
	I0816 18:14:55.182606   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:55.182633   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.185807   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186179   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.186229   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186429   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.186619   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.186770   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.186884   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.262494   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:14:55.285186   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:55.307082   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:55.328912   74510 provision.go:87] duration metric: took 299.848734ms to configureAuth
	I0816 18:14:55.328934   74510 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:55.329140   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:55.329215   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.331989   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332366   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.332414   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332594   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.332801   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333006   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333158   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.333312   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.333501   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.333522   74510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:55.579734   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:55.579765   74510 machine.go:96] duration metric: took 882.296402ms to provisionDockerMachine
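
The provisioning step above pushes a sysconfig drop-in over SSH and restarts CRI-O. Below is a minimal sketch of that kind of remote command execution using golang.org/x/crypto/ssh; the wiring is illustrative only (not minikube's actual ssh_runner), though the host, user, key path and command text are taken from the log.

    // Illustrative sketch: run the sysconfig write shown above over SSH.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address as reported in the log above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
    	}
    	client, err := ssh.Dial("tcp", "192.168.61.218:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// Same shape as the command in the log: write the drop-in, then restart crio.
    	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
    	out, err := sess.CombinedOutput(cmd)
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
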
	I0816 18:14:55.579781   74510 start.go:293] postStartSetup for "embed-certs-777541" (driver="kvm2")
	I0816 18:14:55.579793   74510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:55.579814   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.580182   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:55.580216   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.582826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583250   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.583285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583374   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.583574   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.583739   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.583972   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.663379   74510 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:55.667205   74510 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:55.667231   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:55.667321   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:55.667426   74510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:55.667560   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:55.676427   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:55.698188   74510 start.go:296] duration metric: took 118.396211ms for postStartSetup
	I0816 18:14:55.698226   74510 fix.go:56] duration metric: took 20.644852989s for fixHost
	I0816 18:14:55.698245   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.701014   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701359   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.701390   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701587   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.701755   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.701924   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.702070   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.702241   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.702452   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.702464   74510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:55.801397   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832095.756052952
	
	I0816 18:14:55.801431   74510 fix.go:216] guest clock: 1723832095.756052952
	I0816 18:14:55.801443   74510 fix.go:229] Guest: 2024-08-16 18:14:55.756052952 +0000 UTC Remote: 2024-08-16 18:14:55.698231489 +0000 UTC m=+357.018707788 (delta=57.821463ms)
	I0816 18:14:55.801492   74510 fix.go:200] guest clock delta is within tolerance: 57.821463ms
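
The guest clock check runs `date +%s.%N` on the VM and compares the result to the host-side timestamp; the 57.821463ms delta above is simply the difference between the two values. A small sketch of that arithmetic, assuming a 1-second tolerance (the actual threshold is not shown in the log):

    // Sketch of the clock-skew arithmetic: parse the guest's `date +%s.%N`
    // output and compare it to the host-side timestamp from the log.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseEpoch(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad/truncate the fractional part to exactly 9 digits of nanoseconds.
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
    	guest, err := parseEpoch("1723832095.756052952") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	host := time.Date(2024, 8, 16, 18, 14, 55, 698231489, time.UTC) // host-side timestamp from the log
    	delta := guest.Sub(host)
    	fmt.Printf("guest clock delta: %v (within assumed 1s tolerance: %v)\n",
    		delta, delta < time.Second && delta > -time.Second)
    }
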
	I0816 18:14:55.801504   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 20.74815396s
	I0816 18:14:55.801528   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.801781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.804216   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804617   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.804659   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804795   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805395   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805622   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805730   74510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:55.805781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.805849   74510 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:55.805877   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.808587   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.808946   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.808978   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809080   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809249   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809415   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.809417   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809442   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809575   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809597   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809720   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809766   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.809857   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809970   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.885026   74510 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:55.927940   74510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:56.072936   74510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:56.080952   74510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:56.081029   74510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:56.100709   74510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:56.100734   74510 start.go:495] detecting cgroup driver to use...
	I0816 18:14:56.100791   74510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:56.115759   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:56.129714   74510 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:56.129774   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:56.142909   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:56.156413   74510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:56.268818   74510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:56.424536   74510 docker.go:233] disabling docker service ...
	I0816 18:14:56.424612   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:56.438033   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:56.450479   74510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:56.560132   74510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:56.683671   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:56.697636   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:56.716486   74510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:56.716560   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.726082   74510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:56.726144   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.735971   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.745410   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.754952   74510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:56.764717   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.774153   74510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.789843   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.799399   74510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:56.807679   74510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:56.807743   74510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:56.819873   74510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:56.829921   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:56.936372   74510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:57.073931   74510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:57.073998   74510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:57.078254   74510 start.go:563] Will wait 60s for crictl version
	I0816 18:14:57.078327   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:14:57.081833   74510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:57.121402   74510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:57.121476   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.149262   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.183015   74510 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:53.146986   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.647279   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.147587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.647911   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.147322   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.647765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.147695   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.647296   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.147031   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.647108   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.184157   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:57.186758   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187177   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:57.187206   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187439   74510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:57.191152   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:57.203073   74510 kubeadm.go:883] updating cluster {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:57.203240   74510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:57.203332   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:57.238289   74510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:57.238348   74510 ssh_runner.go:195] Run: which lz4
	I0816 18:14:57.242251   74510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:57.246081   74510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:57.246124   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:58.459887   74510 crio.go:462] duration metric: took 1.217672418s to copy over tarball
	I0816 18:14:58.459960   74510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
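
Before copying the preload tarball, the check at crio.go:510 above lists the images CRI-O already has and looks for the expected kube-apiserver tag. A rough sketch of that kind of check follows; the crictl JSON shape used here is a simplified assumption, and the sketch shells out locally rather than over SSH.

    // Rough sketch: decide whether preloaded images are already present by
    // listing CRI-O images and searching for the expected kube-apiserver tag.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		log.Fatal(err)
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.31.0"
    	found := false
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				found = true
    			}
    		}
    	}
    	if !found {
    		fmt.Println("couldn't find preloaded image for", want, "- assuming images are not preloaded")
    	}
    }
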
	I0816 18:14:54.707069   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.206750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:55.449391   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.449830   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:59.451338   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:58.147661   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.647270   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.147355   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.647821   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.148023   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.647165   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.147669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.647960   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.147721   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.647932   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.545989   74510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085985152s)
	I0816 18:15:00.546028   74510 crio.go:469] duration metric: took 2.086110527s to extract the tarball
	I0816 18:15:00.546039   74510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:15:00.587096   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:15:00.630366   74510 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:15:00.630394   74510 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:15:00.630405   74510 kubeadm.go:934] updating node { 192.168.61.218 8443 v1.31.0 crio true true} ...
	I0816 18:15:00.630540   74510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-777541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:15:00.630630   74510 ssh_runner.go:195] Run: crio config
	I0816 18:15:00.681196   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:00.681224   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:00.681235   74510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:15:00.681262   74510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-777541 NodeName:embed-certs-777541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:15:00.681439   74510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-777541"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:15:00.681534   74510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:15:00.691239   74510 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:15:00.691294   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:15:00.700059   74510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 18:15:00.717826   74510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:15:00.733475   74510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
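
The 2162-byte kubeadm.yaml staged above corresponds to the configuration printed at kubeadm.go:187. As an illustration of how such a fragment could be rendered from the node parameters in the log, here is a hypothetical template (not minikube's real one):

    // Illustrative only: render an InitConfiguration fragment like the one
    // above from node parameters. Template and struct are hypothetical.
    package main

    import (
    	"os"
    	"text/template"
    )

    type nodeParams struct {
    	Name    string
    	IP      string
    	APIPort int
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: {{.APIPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.IP}}
    `

    func main() {
    	// Values as reported for this profile in the log above.
    	p := nodeParams{Name: "embed-certs-777541", IP: "192.168.61.218", APIPort: 8443}
    	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
    	if err := tmpl.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
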
	I0816 18:15:00.750175   74510 ssh_runner.go:195] Run: grep 192.168.61.218	control-plane.minikube.internal$ /etc/hosts
	I0816 18:15:00.753865   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:15:00.765531   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:00.875234   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:00.893095   74510 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541 for IP: 192.168.61.218
	I0816 18:15:00.893115   74510 certs.go:194] generating shared ca certs ...
	I0816 18:15:00.893131   74510 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:00.893274   74510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:15:00.893318   74510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:15:00.893327   74510 certs.go:256] generating profile certs ...
	I0816 18:15:00.893403   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/client.key
	I0816 18:15:00.893459   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key.dd0c1a01
	I0816 18:15:00.893503   74510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key
	I0816 18:15:00.893617   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:15:00.893645   74510 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:15:00.893655   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:15:00.893675   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:15:00.893698   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:15:00.893721   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:15:00.893759   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:15:00.894445   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:15:00.936535   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:15:00.969775   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:15:01.013053   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:15:01.046087   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 18:15:01.073290   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:15:01.097033   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:15:01.119859   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:15:01.141943   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:15:01.168752   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:15:01.191193   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:15:01.213691   74510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:15:01.229374   74510 ssh_runner.go:195] Run: openssl version
	I0816 18:15:01.234563   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:15:01.244301   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248156   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248220   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.253468   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:15:01.262917   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:15:01.272577   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276790   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276841   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.281847   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:15:01.291789   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:15:01.302422   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306320   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306364   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.311335   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:15:01.320713   74510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:15:01.324442   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:15:01.330137   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:15:01.335693   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:15:01.340987   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:15:01.346071   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:15:01.351280   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
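
The series of `openssl x509 -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least the next 24 hours. The same check expressed in Go with crypto/x509; the file paths mirror two of the certificates checked in the log.

    // Sketch of the `-checkend 86400` validity check: load a PEM certificate
    // and report whether it expires within the given window.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True if NotAfter falls before now+d, i.e. the cert expires within the window.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s expiring within 24h: %v\n", p, soon)
    	}
    }
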
	I0816 18:15:01.357275   74510 kubeadm.go:392] StartCluster: {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:15:01.357388   74510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:15:01.357427   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.400422   74510 cri.go:89] found id: ""
	I0816 18:15:01.400497   74510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:15:01.410142   74510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:15:01.410162   74510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:15:01.410211   74510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:15:01.419129   74510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:15:01.420130   74510 kubeconfig.go:125] found "embed-certs-777541" server: "https://192.168.61.218:8443"
	I0816 18:15:01.422036   74510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:15:01.430665   74510 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.218
	I0816 18:15:01.430694   74510 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:15:01.430705   74510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:15:01.430762   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.469108   74510 cri.go:89] found id: ""
	I0816 18:15:01.469182   74510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:15:01.486125   74510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:15:01.495311   74510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:15:01.495335   74510 kubeadm.go:157] found existing configuration files:
	
	I0816 18:15:01.495384   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:15:01.504066   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:15:01.504128   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:15:01.513222   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:15:01.521593   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:15:01.521692   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:15:01.530413   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.539027   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:15:01.539101   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.547802   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:15:01.557143   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:15:01.557203   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:15:01.568616   74510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:15:01.578091   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:01.700661   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.631047   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.833132   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.900476   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.972431   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:15:02.972514   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.473296   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.707731   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:02.206825   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:01.948070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.948398   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.147098   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.646983   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.147320   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.647649   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.647999   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.147901   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.647340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.147339   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.648033   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.973603   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.472779   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.972846   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.473594   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.487878   74510 api_server.go:72] duration metric: took 2.51545841s to wait for apiserver process to appear ...
	I0816 18:15:05.487914   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:15:05.487937   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.450583   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.450618   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.450635   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.495625   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.495656   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.495669   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.516711   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.516744   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:04.836663   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:07.206999   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:06.447839   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.449939   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.988897   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.996347   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.996374   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.488013   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.499514   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:09.499559   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.988080   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.992106   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:15:09.998515   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:15:09.998542   74510 api_server.go:131] duration metric: took 4.510619176s to wait for apiserver health ...
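	The repeated healthz dumps above come from the test driver polling the apiserver's /healthz endpoint roughly every 500ms until it returns 200 (which happens at 18:15:09.992). The following is a minimal sketch of that polling pattern, assuming a self-signed apiserver certificate as on this test VM; it illustrates the pattern, not minikube's actual api_server.go code, and the URL, timeout, and interval are placeholders taken from the log.

```go
// Minimal sketch (not minikube's implementation) of the healthz polling the
// log above shows: hit /healthz until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The test cluster serves a self-signed cert, so verification is skipped
	// here purely for illustration.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is reachable
			}
		}
		time.Sleep(500 * time.Millisecond) // mirrors the ~500ms retry cadence in the log
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.218:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```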
	I0816 18:15:09.998555   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:09.998563   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:10.000470   74510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:15:10.001870   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:15:10.011805   74510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:15:10.032349   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:15:10.046765   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:15:10.046798   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:15:10.046808   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:15:10.046817   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:15:10.046829   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:15:10.046838   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 18:15:10.046847   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:15:10.046855   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:15:10.046867   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 18:15:10.046876   74510 system_pods.go:74] duration metric: took 14.506593ms to wait for pod list to return data ...
	I0816 18:15:10.046889   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:15:10.050663   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:15:10.050686   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:15:10.050699   74510 node_conditions.go:105] duration metric: took 3.805313ms to run NodePressure ...
	I0816 18:15:10.050717   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:10.344177   74510 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348795   74510 kubeadm.go:739] kubelet initialised
	I0816 18:15:10.348820   74510 kubeadm.go:740] duration metric: took 4.612695ms waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348830   74510 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:10.355270   74510 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.361564   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361584   74510 pod_ready.go:82] duration metric: took 6.2936ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.361592   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361598   74510 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.367126   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367149   74510 pod_ready.go:82] duration metric: took 5.542782ms for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.367159   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367166   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.372241   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372262   74510 pod_ready.go:82] duration metric: took 5.086551ms for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.372273   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372301   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.436397   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436423   74510 pod_ready.go:82] duration metric: took 64.108858ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.436432   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436443   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.836116   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836146   74510 pod_ready.go:82] duration metric: took 399.693364ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.836158   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836165   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.235403   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235426   74510 pod_ready.go:82] duration metric: took 399.255693ms for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.235439   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235445   74510 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.635717   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635746   74510 pod_ready.go:82] duration metric: took 400.29283ms for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.635756   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635762   74510 pod_ready.go:39] duration metric: took 1.286923943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
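	The pod_ready lines above wait for each system-critical pod to report the Ready condition, skipping pods whose node is itself not Ready yet. A minimal client-go sketch of that per-pod check is shown below; the kubeconfig path and pod name are illustrative values taken from this log, and the code is an assumed stand-in, not the actual pod_ready.go helper.

```go
// Sketch of checking a pod's Ready condition the way the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod's Ready condition is True.
func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative kubeconfig path; the test drives this over SSH instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly the way the log does until the pod is Ready or we give up.
	for i := 0; i < 60; i++ {
		if ready, err := podIsReady(clientset, "kube-system", "etcd-embed-certs-777541"); err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```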
	I0816 18:15:11.635784   74510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:15:11.646221   74510 ops.go:34] apiserver oom_adj: -16
	I0816 18:15:11.646248   74510 kubeadm.go:597] duration metric: took 10.23607804s to restartPrimaryControlPlane
	I0816 18:15:11.646269   74510 kubeadm.go:394] duration metric: took 10.288999278s to StartCluster
	I0816 18:15:11.646322   74510 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.646405   74510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:15:11.648652   74510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.648939   74510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:15:11.649056   74510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:15:11.649124   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:15:11.649155   74510 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-777541"
	I0816 18:15:11.649165   74510 addons.go:69] Setting metrics-server=true in profile "embed-certs-777541"
	I0816 18:15:11.649192   74510 addons.go:234] Setting addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:11.649201   74510 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-777541"
	W0816 18:15:11.649205   74510 addons.go:243] addon metrics-server should already be in state true
	I0816 18:15:11.649193   74510 addons.go:69] Setting default-storageclass=true in profile "embed-certs-777541"
	I0816 18:15:11.649252   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649254   74510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-777541"
	W0816 18:15:11.649209   74510 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:15:11.649332   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649702   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649706   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649742   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649772   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649877   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649930   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.651580   74510 out.go:177] * Verifying Kubernetes components...
	I0816 18:15:11.652903   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:11.665975   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0816 18:15:11.666041   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0816 18:15:11.666404   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666439   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666986   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667005   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667051   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667085   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667312   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667517   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667846   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.667899   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.668039   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.668077   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.669328   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0816 18:15:11.669765   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.670270   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.670301   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.670658   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.670896   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.674148   74510 addons.go:234] Setting addon default-storageclass=true in "embed-certs-777541"
	W0816 18:15:11.674165   74510 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:15:11.674184   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.674448   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.674482   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.683629   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0816 18:15:11.683637   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0816 18:15:11.684040   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684048   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684499   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684516   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684653   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684670   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684968   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685114   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685136   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.685329   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.687030   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.687130   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.688852   74510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:15:11.688855   74510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:15:08.147308   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.647669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.147149   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.647072   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.147381   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.647567   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.147101   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.647587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.146972   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.647842   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.689590   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0816 18:15:11.690041   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.690152   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:15:11.690170   74510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:15:11.690186   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690223   74510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:11.690238   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:15:11.690253   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690606   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.690627   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.691006   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.691543   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.691575   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.693646   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693669   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693988   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694007   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694051   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694275   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694436   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694468   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694545   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694602   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694677   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.694885   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.709409   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0816 18:15:11.709800   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.710343   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.710363   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.710700   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.710874   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.712484   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.712691   74510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:11.712706   74510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:15:11.712723   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.715590   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716017   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.716050   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716167   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.716379   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.716572   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.716737   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.864710   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:11.885871   74510 node_ready.go:35] waiting up to 6m0s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:11.985725   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:12.007635   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:15:12.007669   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:15:12.040044   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:12.059661   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:15:12.059687   74510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:15:12.123787   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.123812   74510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:15:12.167249   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.457960   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.457985   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458264   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:12.458315   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458334   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.458348   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.458360   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458577   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458590   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468651   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.468675   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.468921   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.468940   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468963   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.203995   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163904081s)
	I0816 18:15:13.204048   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204060   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204309   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.204350   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204359   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.204368   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204376   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204562   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204589   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213068   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.045790147s)
	I0816 18:15:13.213101   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213115   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213533   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213551   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213555   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.213560   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213595   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213869   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213887   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213897   74510 addons.go:475] Verifying addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:13.213901   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.215724   74510 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 18:15:13.217031   74510 addons.go:510] duration metric: took 1.567977779s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
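	The addon phase above copies the storage-provisioner and metrics-server manifests onto the node and applies them with the bundled kubectl against the node-local kubeconfig. A hedged sketch of that apply step, reusing the exact paths shown in the log but run locally for illustration rather than through minikube's ssh_runner, could look like this:

```go
// Illustrative re-run of the addon apply command from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// Same shape as: sudo env KUBECONFIG=... kubectl apply -f ... -f ...
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}
```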
	I0816 18:15:09.706813   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:11.708577   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:10.947986   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:12.949227   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:13.147558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.647755   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.147408   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.647810   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.147888   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.647476   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.647785   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.647852   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.889379   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:15.889764   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:18.390031   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:14.207743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:16.705831   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:15.448826   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:17.950756   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:18.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.647013   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.147027   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.647100   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.147070   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.147251   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.647856   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.147427   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.647231   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.890110   74510 node_ready.go:49] node "embed-certs-777541" has status "Ready":"True"
	I0816 18:15:18.890138   74510 node_ready.go:38] duration metric: took 7.004237799s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:18.890156   74510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:18.897124   74510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902860   74510 pod_ready.go:93] pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:18.902878   74510 pod_ready.go:82] duration metric: took 5.73242ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902886   74510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:20.909185   74510 pod_ready.go:103] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.909629   74510 pod_ready.go:93] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:21.909660   74510 pod_ready.go:82] duration metric: took 3.006768325s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:21.909670   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916066   74510 pod_ready.go:93] pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.916090   74510 pod_ready.go:82] duration metric: took 1.006414177s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916099   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920882   74510 pod_ready.go:93] pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.920908   74510 pod_ready.go:82] duration metric: took 4.802561ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920918   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926952   74510 pod_ready.go:93] pod "kube-proxy-j5rl7" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.926975   74510 pod_ready.go:82] duration metric: took 6.0498ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926984   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:19.206127   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.206280   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.705588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:20.448793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:22.948798   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.647030   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.147677   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.647324   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.147973   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.147160   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.646963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.147620   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.647918   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.933953   74510 pod_ready.go:103] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.433826   74510 pod_ready.go:93] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:25.433846   74510 pod_ready.go:82] duration metric: took 2.506855714s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:25.433855   74510 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:27.440119   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.707915   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.206580   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.447687   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:27.948700   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.146994   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.647364   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.147332   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.647773   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.147276   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.647794   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.147398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.647565   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.147139   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.647961   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.440564   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:31.940747   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:30.706544   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.706852   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:29.948982   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.447920   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:34.448186   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:33.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.647087   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.147881   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.646988   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.147118   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.647978   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.147541   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.647423   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.147051   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.647726   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.439692   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.439956   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.440315   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:35.206291   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:37.206902   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.948416   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.447952   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.147192   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.647318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.147186   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.647662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.147044   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.647787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.147638   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.647490   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.147787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.647959   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.440405   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:42.440727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.207086   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.706048   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.706585   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.450069   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.948101   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.147938   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.647855   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.147781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.647710   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:44.647796   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:44.682176   75402 cri.go:89] found id: ""
	I0816 18:15:44.682207   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.682218   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:44.682226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:44.682285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:44.717500   75402 cri.go:89] found id: ""
	I0816 18:15:44.717530   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.717540   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:44.717552   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:44.717620   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:44.751816   75402 cri.go:89] found id: ""
	I0816 18:15:44.751847   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.751858   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:44.751865   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:44.751942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:44.783236   75402 cri.go:89] found id: ""
	I0816 18:15:44.783260   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.783267   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:44.783272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:44.783337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:44.813087   75402 cri.go:89] found id: ""
	I0816 18:15:44.813110   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.813116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:44.813122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:44.813166   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:44.843568   75402 cri.go:89] found id: ""
	I0816 18:15:44.843599   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.843609   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:44.843616   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:44.843679   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:44.873694   75402 cri.go:89] found id: ""
	I0816 18:15:44.873723   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.873734   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:44.873741   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:44.873808   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:44.906183   75402 cri.go:89] found id: ""
	I0816 18:15:44.906212   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.906222   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:44.906231   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:44.906241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:44.958963   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:44.958993   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:44.972390   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:44.972415   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:45.091624   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:45.091645   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:45.091661   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:45.159927   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:45.159963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:47.698398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:47.711848   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:47.711917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:47.744247   75402 cri.go:89] found id: ""
	I0816 18:15:47.744278   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.744288   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:47.744295   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:47.744374   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:47.783188   75402 cri.go:89] found id: ""
	I0816 18:15:47.783211   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.783219   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:47.783224   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:47.783270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:47.829284   75402 cri.go:89] found id: ""
	I0816 18:15:47.829320   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.829333   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:47.829341   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:47.829413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:47.879482   75402 cri.go:89] found id: ""
	I0816 18:15:47.879514   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.879525   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:47.879532   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:47.879606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:47.913766   75402 cri.go:89] found id: ""
	I0816 18:15:47.913797   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.913808   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:47.913815   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:47.913880   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:47.947262   75402 cri.go:89] found id: ""
	I0816 18:15:47.947340   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.947353   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:47.947362   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:47.947427   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:47.979638   75402 cri.go:89] found id: ""
	I0816 18:15:47.979667   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.979678   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:47.979685   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:47.979741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:48.010246   75402 cri.go:89] found id: ""
	I0816 18:15:48.010277   75402 logs.go:276] 0 containers: []
	W0816 18:15:48.010288   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:48.010296   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:48.010310   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:48.083916   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:48.083953   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:44.940775   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.440356   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:46.207236   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.705791   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:45.948300   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.948501   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.120254   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:48.120285   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:48.169590   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:48.169628   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:48.182821   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:48.182850   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:48.254088   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:50.755114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:50.768167   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:50.768250   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:50.800881   75402 cri.go:89] found id: ""
	I0816 18:15:50.800906   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.800913   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:50.800918   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:50.800969   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:50.833538   75402 cri.go:89] found id: ""
	I0816 18:15:50.833567   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.833578   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:50.833586   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:50.833649   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:50.867306   75402 cri.go:89] found id: ""
	I0816 18:15:50.867336   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.867347   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:50.867353   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:50.867400   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:50.900029   75402 cri.go:89] found id: ""
	I0816 18:15:50.900055   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.900064   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:50.900072   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:50.900135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:50.933604   75402 cri.go:89] found id: ""
	I0816 18:15:50.933630   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.933638   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:50.933643   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:50.933707   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:50.966102   75402 cri.go:89] found id: ""
	I0816 18:15:50.966131   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.966141   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:50.966149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:50.966210   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:50.998007   75402 cri.go:89] found id: ""
	I0816 18:15:50.998036   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.998047   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:50.998054   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:50.998115   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:51.032306   75402 cri.go:89] found id: ""
	I0816 18:15:51.032342   75402 logs.go:276] 0 containers: []
	W0816 18:15:51.032349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:51.032357   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:51.032369   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:51.083186   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:51.083222   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:51.096072   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:51.096153   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:51.162667   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:51.162693   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:51.162709   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:51.241913   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:51.241954   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:49.440546   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:51.940026   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.706662   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.206075   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.447947   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:52.448340   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:54.448431   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.779323   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:53.793358   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:53.793433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:53.827380   75402 cri.go:89] found id: ""
	I0816 18:15:53.827414   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.827424   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:53.827430   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:53.827489   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:53.867331   75402 cri.go:89] found id: ""
	I0816 18:15:53.867370   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.867380   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:53.867386   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:53.867438   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:53.899445   75402 cri.go:89] found id: ""
	I0816 18:15:53.899477   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.899489   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:53.899498   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:53.899588   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:53.936527   75402 cri.go:89] found id: ""
	I0816 18:15:53.936556   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.936568   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:53.936576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:53.936653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:53.970739   75402 cri.go:89] found id: ""
	I0816 18:15:53.970765   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.970773   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:53.970780   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:53.970842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:54.004119   75402 cri.go:89] found id: ""
	I0816 18:15:54.004150   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.004159   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:54.004164   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:54.004217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:54.038370   75402 cri.go:89] found id: ""
	I0816 18:15:54.038400   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.038411   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:54.038416   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:54.038472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:54.079346   75402 cri.go:89] found id: ""
	I0816 18:15:54.079375   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.079383   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:54.079392   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:54.079403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:54.116551   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:54.116586   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:54.169930   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:54.169970   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:54.182416   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:54.182448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:54.253516   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:54.253539   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:54.253559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:56.833124   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:56.846139   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:56.846211   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:56.880899   75402 cri.go:89] found id: ""
	I0816 18:15:56.880928   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.880939   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:56.880945   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:56.880994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:56.913362   75402 cri.go:89] found id: ""
	I0816 18:15:56.913393   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.913406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:56.913415   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:56.913507   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:56.951876   75402 cri.go:89] found id: ""
	I0816 18:15:56.951904   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.951914   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:56.951919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:56.951988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:56.986335   75402 cri.go:89] found id: ""
	I0816 18:15:56.986358   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.986366   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:56.986372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:56.986423   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:57.022485   75402 cri.go:89] found id: ""
	I0816 18:15:57.022511   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.022522   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:57.022529   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:57.022641   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:57.055436   75402 cri.go:89] found id: ""
	I0816 18:15:57.055463   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.055470   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:57.055476   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:57.055536   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:57.085930   75402 cri.go:89] found id: ""
	I0816 18:15:57.085965   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.085975   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:57.085981   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:57.086032   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:57.120436   75402 cri.go:89] found id: ""
	I0816 18:15:57.120466   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.120477   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:57.120488   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:57.120501   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:57.202161   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:57.202218   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:57.243766   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:57.243805   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:57.295552   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:57.295585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:57.307769   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:57.307802   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:57.390480   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:53.941399   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.439763   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:58.440357   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:55.206970   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:57.207312   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.948085   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.448174   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.891480   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:59.904766   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:59.904836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:59.939209   75402 cri.go:89] found id: ""
	I0816 18:15:59.939241   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.939252   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:59.939260   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:59.939324   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:59.971782   75402 cri.go:89] found id: ""
	I0816 18:15:59.971812   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.971822   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:59.971832   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:59.971894   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:00.018585   75402 cri.go:89] found id: ""
	I0816 18:16:00.018630   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.018643   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:00.018654   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:00.018722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.050484   75402 cri.go:89] found id: ""
	I0816 18:16:00.050520   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.050532   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:00.050540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:00.050603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:00.082900   75402 cri.go:89] found id: ""
	I0816 18:16:00.082930   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.082942   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:00.082951   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:00.083025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:00.115330   75402 cri.go:89] found id: ""
	I0816 18:16:00.115363   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.115372   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:00.115378   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:00.115442   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:00.150898   75402 cri.go:89] found id: ""
	I0816 18:16:00.150935   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.150952   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:00.150960   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:00.151033   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:00.193304   75402 cri.go:89] found id: ""
	I0816 18:16:00.193338   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.193349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:00.193359   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:00.193370   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:00.247340   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:00.247376   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:00.260470   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:00.260500   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:00.336483   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:00.336506   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:00.336521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:00.421251   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:00.421289   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:02.964042   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:02.977284   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:02.977381   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:03.009533   75402 cri.go:89] found id: ""
	I0816 18:16:03.009574   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.009586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:03.009594   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:03.009673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:03.043756   75402 cri.go:89] found id: ""
	I0816 18:16:03.043784   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.043794   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:03.043802   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:03.043867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:03.078817   75402 cri.go:89] found id: ""
	I0816 18:16:03.078840   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.078848   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:03.078853   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:03.078906   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.440728   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:02.440788   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.706129   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.707967   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.948193   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.448504   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:03.112874   75402 cri.go:89] found id: ""
	I0816 18:16:03.112903   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.112912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:03.112918   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:03.112985   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:03.152008   75402 cri.go:89] found id: ""
	I0816 18:16:03.152040   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.152052   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:03.152059   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:03.152125   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:03.187353   75402 cri.go:89] found id: ""
	I0816 18:16:03.187386   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.187396   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:03.187404   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:03.187467   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:03.220860   75402 cri.go:89] found id: ""
	I0816 18:16:03.220895   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.220903   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:03.220909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:03.220958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:03.252202   75402 cri.go:89] found id: ""
	I0816 18:16:03.252240   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.252247   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:03.252256   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:03.252268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:03.286907   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:03.286934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:03.338212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:03.338249   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:03.352548   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:03.352585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:03.427580   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:03.427610   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:03.427626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.011792   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:06.024201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:06.024277   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:06.058328   75402 cri.go:89] found id: ""
	I0816 18:16:06.058356   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.058367   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:06.058373   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:06.058433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:06.091262   75402 cri.go:89] found id: ""
	I0816 18:16:06.091298   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.091311   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:06.091318   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:06.091382   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:06.124114   75402 cri.go:89] found id: ""
	I0816 18:16:06.124146   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.124154   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:06.124159   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:06.124220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:06.155379   75402 cri.go:89] found id: ""
	I0816 18:16:06.155406   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.155416   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:06.155422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:06.155471   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:06.189442   75402 cri.go:89] found id: ""
	I0816 18:16:06.189472   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.189480   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:06.189485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:06.189538   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:06.228881   75402 cri.go:89] found id: ""
	I0816 18:16:06.228910   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.228921   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:06.228929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:06.229003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:06.262272   75402 cri.go:89] found id: ""
	I0816 18:16:06.262299   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.262310   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:06.262317   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:06.262386   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:06.295427   75402 cri.go:89] found id: ""
	I0816 18:16:06.295456   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.295468   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:06.295478   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:06.295492   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:06.347569   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:06.347608   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:06.362786   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:06.362825   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:06.432020   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:06.432044   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:06.432059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.512085   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:06.512120   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:04.940128   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:07.439708   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.206477   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.208125   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.706765   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.947599   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.948183   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:09.051957   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:09.066630   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:09.066690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:09.101484   75402 cri.go:89] found id: ""
	I0816 18:16:09.101515   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.101526   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:09.101536   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:09.101614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:09.140645   75402 cri.go:89] found id: ""
	I0816 18:16:09.140677   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.140689   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:09.140696   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:09.140758   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:09.174666   75402 cri.go:89] found id: ""
	I0816 18:16:09.174698   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.174708   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:09.174717   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:09.174780   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:09.209715   75402 cri.go:89] found id: ""
	I0816 18:16:09.209748   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.209758   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:09.209767   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:09.209845   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:09.243681   75402 cri.go:89] found id: ""
	I0816 18:16:09.243712   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.243720   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:09.243726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:09.243781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:09.278058   75402 cri.go:89] found id: ""
	I0816 18:16:09.278090   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.278102   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:09.278111   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:09.278178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:09.313092   75402 cri.go:89] found id: ""
	I0816 18:16:09.313122   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.313132   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:09.313137   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:09.313201   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:09.345203   75402 cri.go:89] found id: ""
	I0816 18:16:09.345229   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.345236   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:09.345245   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:09.345259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:09.358198   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:09.358225   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:09.422024   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:09.422047   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:09.422059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:09.498684   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:09.498717   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.535349   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:09.535382   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.087472   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:12.100412   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:12.100477   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:12.133982   75402 cri.go:89] found id: ""
	I0816 18:16:12.134018   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.134030   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:12.134038   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:12.134100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:12.166466   75402 cri.go:89] found id: ""
	I0816 18:16:12.166497   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.166507   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:12.166514   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:12.166589   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:12.197752   75402 cri.go:89] found id: ""
	I0816 18:16:12.197779   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.197790   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:12.197797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:12.197856   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:12.239759   75402 cri.go:89] found id: ""
	I0816 18:16:12.239789   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.239801   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:12.239810   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:12.239871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:12.273263   75402 cri.go:89] found id: ""
	I0816 18:16:12.273292   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.273302   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:12.273310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:12.273370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:12.308788   75402 cri.go:89] found id: ""
	I0816 18:16:12.308820   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.308831   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:12.308839   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:12.308897   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:12.345243   75402 cri.go:89] found id: ""
	I0816 18:16:12.345274   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.345281   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:12.345288   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:12.345341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:12.379939   75402 cri.go:89] found id: ""
	I0816 18:16:12.379968   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.379978   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:12.379989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:12.380004   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.436097   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:12.436130   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:12.449328   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:12.449357   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:12.518723   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:12.518749   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:12.518764   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:12.600228   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:12.600268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.441051   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.441097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.206853   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.705328   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.449793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.948517   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.137940   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:15.150617   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:15.150694   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:15.186029   75402 cri.go:89] found id: ""
	I0816 18:16:15.186057   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.186067   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:15.186074   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:15.186134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:15.219812   75402 cri.go:89] found id: ""
	I0816 18:16:15.219840   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.219851   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:15.219864   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:15.219927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:15.253434   75402 cri.go:89] found id: ""
	I0816 18:16:15.253462   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.253472   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:15.253479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:15.253542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:15.286697   75402 cri.go:89] found id: ""
	I0816 18:16:15.286729   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.286745   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:15.286751   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:15.286810   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:15.319363   75402 cri.go:89] found id: ""
	I0816 18:16:15.319405   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.319415   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:15.319422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:15.319506   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:15.353900   75402 cri.go:89] found id: ""
	I0816 18:16:15.353924   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.353931   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:15.353937   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:15.353991   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:15.389086   75402 cri.go:89] found id: ""
	I0816 18:16:15.389114   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.389122   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:15.389127   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:15.389184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:15.424069   75402 cri.go:89] found id: ""
	I0816 18:16:15.424099   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.424110   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:15.424121   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:15.424136   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:15.482703   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:15.482738   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:15.496859   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:15.496886   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:15.562178   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:15.562196   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:15.562212   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:15.643484   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:15.643521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:13.944174   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.439987   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.442569   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.706743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.206088   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.448775   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.948447   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.180963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:18.194705   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:18.194783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:18.231302   75402 cri.go:89] found id: ""
	I0816 18:16:18.231337   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.231348   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:18.231355   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:18.231413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:18.264098   75402 cri.go:89] found id: ""
	I0816 18:16:18.264124   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.264135   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:18.264155   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:18.264228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:18.298133   75402 cri.go:89] found id: ""
	I0816 18:16:18.298165   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.298178   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:18.298186   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:18.298252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:18.331323   75402 cri.go:89] found id: ""
	I0816 18:16:18.331354   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.331362   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:18.331367   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:18.331416   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:18.365677   75402 cri.go:89] found id: ""
	I0816 18:16:18.365709   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.365718   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:18.365724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:18.365774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:18.399801   75402 cri.go:89] found id: ""
	I0816 18:16:18.399835   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.399844   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:18.399850   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:18.399908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:18.438148   75402 cri.go:89] found id: ""
	I0816 18:16:18.438179   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.438189   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:18.438197   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:18.438257   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:18.472185   75402 cri.go:89] found id: ""
	I0816 18:16:18.472215   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.472223   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:18.472232   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:18.472243   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:18.523369   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:18.523400   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:18.536152   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:18.536179   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:18.611539   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:18.611560   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:18.611571   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:18.688043   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:18.688079   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:21.229163   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:21.242641   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:21.242717   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:21.275188   75402 cri.go:89] found id: ""
	I0816 18:16:21.275213   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.275220   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:21.275226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:21.275275   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:21.308377   75402 cri.go:89] found id: ""
	I0816 18:16:21.308406   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.308417   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:21.308424   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:21.308475   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:21.341067   75402 cri.go:89] found id: ""
	I0816 18:16:21.341098   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.341106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:21.341112   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:21.341170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:21.372707   75402 cri.go:89] found id: ""
	I0816 18:16:21.372743   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.372756   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:21.372763   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:21.372847   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:21.410210   75402 cri.go:89] found id: ""
	I0816 18:16:21.410241   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.410252   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:21.410259   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:21.410323   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:21.444840   75402 cri.go:89] found id: ""
	I0816 18:16:21.444863   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.444872   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:21.444879   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:21.444942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:21.478278   75402 cri.go:89] found id: ""
	I0816 18:16:21.478319   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.478327   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:21.478333   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:21.478395   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:21.512026   75402 cri.go:89] found id: ""
	I0816 18:16:21.512063   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.512073   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:21.512090   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:21.512111   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:21.564800   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:21.564834   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:21.577343   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:21.577368   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:21.663216   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:21.663238   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:21.663251   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:21.741960   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:21.741994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:20.939740   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.942844   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:20.706032   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.707112   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:21.449404   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:23.454804   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:24.282136   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:24.296452   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:24.296513   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:24.337173   75402 cri.go:89] found id: ""
	I0816 18:16:24.337200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.337210   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:24.337218   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:24.337282   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:24.374163   75402 cri.go:89] found id: ""
	I0816 18:16:24.374200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.374213   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:24.374222   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:24.374287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:24.407823   75402 cri.go:89] found id: ""
	I0816 18:16:24.407854   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.407866   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:24.407881   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:24.407953   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:24.444006   75402 cri.go:89] found id: ""
	I0816 18:16:24.444032   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.444042   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:24.444049   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:24.444113   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:24.479082   75402 cri.go:89] found id: ""
	I0816 18:16:24.479110   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.479119   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:24.479125   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:24.479174   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:24.524738   75402 cri.go:89] found id: ""
	I0816 18:16:24.524764   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.524775   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:24.524782   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:24.524842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:24.560298   75402 cri.go:89] found id: ""
	I0816 18:16:24.560326   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.560335   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:24.560343   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:24.560406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:24.597182   75402 cri.go:89] found id: ""
	I0816 18:16:24.597214   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.597227   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:24.597239   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:24.597254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:24.653063   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:24.653106   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:24.665940   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:24.665972   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:24.736599   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:24.736639   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:24.736657   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:24.821883   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:24.821939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.359558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:27.382980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:27.383053   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:27.416766   75402 cri.go:89] found id: ""
	I0816 18:16:27.416793   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.416802   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:27.416811   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:27.416873   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:27.452966   75402 cri.go:89] found id: ""
	I0816 18:16:27.452988   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.452995   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:27.453001   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:27.453050   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:27.485850   75402 cri.go:89] found id: ""
	I0816 18:16:27.485885   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.485896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:27.485903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:27.485960   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:27.517667   75402 cri.go:89] found id: ""
	I0816 18:16:27.517694   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.517704   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:27.517711   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:27.517774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:27.553547   75402 cri.go:89] found id: ""
	I0816 18:16:27.553574   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.553582   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:27.553593   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:27.553653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:27.586857   75402 cri.go:89] found id: ""
	I0816 18:16:27.586884   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.586893   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:27.586898   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:27.586957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:27.621739   75402 cri.go:89] found id: ""
	I0816 18:16:27.621766   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.621776   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:27.621784   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:27.621844   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:27.657772   75402 cri.go:89] found id: ""
	I0816 18:16:27.657797   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.657805   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:27.657819   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:27.657831   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:27.729769   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:27.729796   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:27.729810   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:27.813351   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:27.813403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.852985   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:27.853010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:27.908434   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:27.908476   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:25.439828   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.440749   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.207590   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.706496   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.948579   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:28.448590   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.422781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:30.435987   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:30.436070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:30.470878   75402 cri.go:89] found id: ""
	I0816 18:16:30.470907   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.470918   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:30.470926   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:30.470983   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:30.504940   75402 cri.go:89] found id: ""
	I0816 18:16:30.504969   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.504980   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:30.504988   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:30.505058   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:30.538680   75402 cri.go:89] found id: ""
	I0816 18:16:30.538708   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.538716   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:30.538722   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:30.538788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:30.574757   75402 cri.go:89] found id: ""
	I0816 18:16:30.574782   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.574791   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:30.574797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:30.574853   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:30.612500   75402 cri.go:89] found id: ""
	I0816 18:16:30.612529   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.612539   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:30.612547   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:30.612613   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:30.644572   75402 cri.go:89] found id: ""
	I0816 18:16:30.644595   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.644603   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:30.644609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:30.644678   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:30.678199   75402 cri.go:89] found id: ""
	I0816 18:16:30.678232   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.678243   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:30.678252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:30.678331   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:30.709435   75402 cri.go:89] found id: ""
	I0816 18:16:30.709470   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.709482   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:30.709494   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:30.709511   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.723430   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:30.723464   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:30.800340   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:30.800374   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:30.800390   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:30.883945   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:30.883986   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:30.922107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:30.922139   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:29.940430   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.440198   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:29.706649   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.205271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.949515   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.448456   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.480016   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:33.494178   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:33.494241   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:33.529497   75402 cri.go:89] found id: ""
	I0816 18:16:33.529527   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.529546   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:33.529554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:33.529614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:33.566670   75402 cri.go:89] found id: ""
	I0816 18:16:33.566700   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.566711   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:33.566718   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:33.566781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:33.603898   75402 cri.go:89] found id: ""
	I0816 18:16:33.603926   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.603937   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:33.603944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:33.604003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:33.636077   75402 cri.go:89] found id: ""
	I0816 18:16:33.636111   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.636125   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:33.636134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:33.636200   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:33.668974   75402 cri.go:89] found id: ""
	I0816 18:16:33.669002   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.669011   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:33.669017   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:33.669070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:33.700981   75402 cri.go:89] found id: ""
	I0816 18:16:33.701010   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.701019   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:33.701026   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:33.701088   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:33.735430   75402 cri.go:89] found id: ""
	I0816 18:16:33.735463   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.735474   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:33.735481   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:33.735539   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:33.779797   75402 cri.go:89] found id: ""
	I0816 18:16:33.779829   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.779840   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:33.779851   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:33.779865   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:33.824873   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:33.824908   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.874177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:33.874217   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:33.888535   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:33.888561   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:33.957590   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:33.957608   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:33.957627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:36.533660   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:36.546542   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:36.546606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:36.584056   75402 cri.go:89] found id: ""
	I0816 18:16:36.584085   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.584094   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:36.584099   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:36.584149   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:36.622143   75402 cri.go:89] found id: ""
	I0816 18:16:36.622172   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.622184   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:36.622193   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:36.622262   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:36.655479   75402 cri.go:89] found id: ""
	I0816 18:16:36.655509   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.655520   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:36.655528   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:36.655603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:36.688044   75402 cri.go:89] found id: ""
	I0816 18:16:36.688076   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.688088   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:36.688096   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:36.688161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:36.725831   75402 cri.go:89] found id: ""
	I0816 18:16:36.725861   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.725868   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:36.725874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:36.725925   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:36.758398   75402 cri.go:89] found id: ""
	I0816 18:16:36.758433   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.758444   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:36.758453   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:36.758517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:36.791097   75402 cri.go:89] found id: ""
	I0816 18:16:36.791126   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.791136   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:36.791144   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:36.791207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:36.829337   75402 cri.go:89] found id: ""
	I0816 18:16:36.829369   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.829380   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:36.829391   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:36.829405   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:36.881898   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:36.881932   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:36.895584   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:36.895618   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:36.967175   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:36.967197   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:36.967213   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:37.046993   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:37.047025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:34.440475   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.946369   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:34.206677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.207893   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:38.706193   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:35.449611   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:37.947527   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.588683   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:39.607205   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:39.607287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:39.640517   75402 cri.go:89] found id: ""
	I0816 18:16:39.640541   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.640549   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:39.640554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:39.640604   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:39.673777   75402 cri.go:89] found id: ""
	I0816 18:16:39.673805   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.673813   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:39.673818   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:39.673899   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:39.709574   75402 cri.go:89] found id: ""
	I0816 18:16:39.709598   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.709606   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:39.709611   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:39.709666   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:39.743946   75402 cri.go:89] found id: ""
	I0816 18:16:39.743971   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.743979   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:39.743985   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:39.744041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:39.776140   75402 cri.go:89] found id: ""
	I0816 18:16:39.776171   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.776181   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:39.776187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:39.776254   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:39.808697   75402 cri.go:89] found id: ""
	I0816 18:16:39.808719   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.808728   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:39.808734   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:39.808793   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:39.840163   75402 cri.go:89] found id: ""
	I0816 18:16:39.840190   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.840200   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:39.840206   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:39.840270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:39.874396   75402 cri.go:89] found id: ""
	I0816 18:16:39.874419   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.874426   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:39.874434   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:39.874448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:39.927922   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:39.927963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:39.942048   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:39.942076   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:40.012143   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:40.012166   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:40.012181   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:40.088798   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:40.088844   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:42.625875   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:42.640386   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:42.640448   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:42.675201   75402 cri.go:89] found id: ""
	I0816 18:16:42.675224   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.675231   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:42.675236   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:42.675293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:42.705156   75402 cri.go:89] found id: ""
	I0816 18:16:42.705182   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.705192   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:42.705199   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:42.705258   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:42.738921   75402 cri.go:89] found id: ""
	I0816 18:16:42.738948   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.738956   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:42.738962   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:42.739013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:42.771130   75402 cri.go:89] found id: ""
	I0816 18:16:42.771160   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.771168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:42.771175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:42.771231   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:42.805774   75402 cri.go:89] found id: ""
	I0816 18:16:42.805803   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.805811   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:42.805817   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:42.805879   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:42.840248   75402 cri.go:89] found id: ""
	I0816 18:16:42.840277   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.840293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:42.840302   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:42.840360   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:42.873260   75402 cri.go:89] found id: ""
	I0816 18:16:42.873287   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.873297   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:42.873322   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:42.873383   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:42.906205   75402 cri.go:89] found id: ""
	I0816 18:16:42.906230   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.906238   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:42.906247   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:42.906257   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:42.959235   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:42.959272   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:42.972063   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:42.972090   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:43.039530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:43.039558   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:43.039569   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:39.440219   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:41.441052   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:40.707059   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.210643   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:42.448534   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.115486   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:43.115519   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:45.651040   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:45.663718   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:45.663812   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:45.696548   75402 cri.go:89] found id: ""
	I0816 18:16:45.696578   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.696586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:45.696591   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:45.696663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:45.731032   75402 cri.go:89] found id: ""
	I0816 18:16:45.731059   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.731068   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:45.731073   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:45.731126   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:45.764801   75402 cri.go:89] found id: ""
	I0816 18:16:45.764829   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.764840   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:45.764846   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:45.764908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:45.800768   75402 cri.go:89] found id: ""
	I0816 18:16:45.800795   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.800803   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:45.800809   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:45.800858   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:45.841460   75402 cri.go:89] found id: ""
	I0816 18:16:45.841486   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.841493   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:45.841505   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:45.841566   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:45.875230   75402 cri.go:89] found id: ""
	I0816 18:16:45.875254   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.875261   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:45.875266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:45.875319   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:45.907711   75402 cri.go:89] found id: ""
	I0816 18:16:45.907739   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.907747   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:45.907753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:45.907804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:45.943147   75402 cri.go:89] found id: ""
	I0816 18:16:45.943171   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.943182   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:45.943192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:45.943206   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:45.998459   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:45.998491   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:46.013237   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:46.013267   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:46.079248   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:46.079273   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:46.079288   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:46.158842   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:46.158874   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:43.939212   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.939893   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:47.940331   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.706588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.206342   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:44.948046   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:46.948752   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:49.448263   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.696728   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:48.710946   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:48.711041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:48.746696   75402 cri.go:89] found id: ""
	I0816 18:16:48.746727   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.746735   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:48.746741   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:48.746803   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:48.781496   75402 cri.go:89] found id: ""
	I0816 18:16:48.781522   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.781532   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:48.781539   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:48.781603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:48.815628   75402 cri.go:89] found id: ""
	I0816 18:16:48.815654   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.815665   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:48.815673   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:48.815736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:48.848990   75402 cri.go:89] found id: ""
	I0816 18:16:48.849018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.849030   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:48.849040   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:48.849098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:48.886924   75402 cri.go:89] found id: ""
	I0816 18:16:48.886949   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.886960   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:48.886968   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:48.887022   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:48.923989   75402 cri.go:89] found id: ""
	I0816 18:16:48.924018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.924030   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:48.924038   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:48.924102   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:48.959513   75402 cri.go:89] found id: ""
	I0816 18:16:48.959546   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.959556   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:48.959562   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:48.959614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:48.995615   75402 cri.go:89] found id: ""
	I0816 18:16:48.995651   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.995662   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:48.995673   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:48.995688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:49.008440   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:49.008468   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:49.076761   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:49.076780   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:49.076797   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:49.152855   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:49.152893   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:49.190857   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:49.190887   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:51.745344   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:51.759552   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:51.759628   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:51.795494   75402 cri.go:89] found id: ""
	I0816 18:16:51.795520   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.795531   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:51.795539   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:51.795600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:51.833162   75402 cri.go:89] found id: ""
	I0816 18:16:51.833188   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.833198   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:51.833205   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:51.833265   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:51.866940   75402 cri.go:89] found id: ""
	I0816 18:16:51.866968   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.866979   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:51.866986   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:51.867051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:51.899824   75402 cri.go:89] found id: ""
	I0816 18:16:51.899857   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.899867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:51.899874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:51.899937   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:51.932273   75402 cri.go:89] found id: ""
	I0816 18:16:51.932297   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.932312   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:51.932320   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:51.932390   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:51.966885   75402 cri.go:89] found id: ""
	I0816 18:16:51.966911   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.966922   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:51.966930   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:51.966996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:52.002988   75402 cri.go:89] found id: ""
	I0816 18:16:52.003020   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.003029   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:52.003035   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:52.003098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:52.038858   75402 cri.go:89] found id: ""
	I0816 18:16:52.038894   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.038909   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:52.038919   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:52.038933   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:52.076404   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:52.076431   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:52.127735   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:52.127767   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:52.140657   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:52.140680   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:52.202961   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:52.202989   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:52.203008   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:50.440577   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.441865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:50.705618   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.706795   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:51.448948   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:53.947907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:54.787095   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:54.801258   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:54.801332   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:54.837987   75402 cri.go:89] found id: ""
	I0816 18:16:54.838018   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.838028   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:54.838034   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:54.838118   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:54.872439   75402 cri.go:89] found id: ""
	I0816 18:16:54.872466   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.872477   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:54.872490   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:54.872554   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:54.904676   75402 cri.go:89] found id: ""
	I0816 18:16:54.904706   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.904717   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:54.904724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:54.904783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:54.938101   75402 cri.go:89] found id: ""
	I0816 18:16:54.938134   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.938145   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:54.938154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:54.938218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:54.977409   75402 cri.go:89] found id: ""
	I0816 18:16:54.977442   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.977453   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:54.977460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:54.977521   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:55.013248   75402 cri.go:89] found id: ""
	I0816 18:16:55.013275   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.013286   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:55.013294   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:55.013363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:55.044555   75402 cri.go:89] found id: ""
	I0816 18:16:55.044588   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.044597   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:55.044603   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:55.044690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:55.075970   75402 cri.go:89] found id: ""
	I0816 18:16:55.075997   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.076006   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:55.076014   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:55.076025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:55.149982   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:55.150017   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:55.190160   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:55.190194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:55.242629   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:55.242660   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:55.255229   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:55.255254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:55.324775   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
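	Each retry cycle above repeats the same diagnostic sequence: pgrep for a running kube-apiserver process, then "crictl ps -a --quiet --name=<component>" for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and finally kubelet/dmesg/describe-nodes/CRI-O log gathering. A rough Go sketch of that crictl loop, assuming sudo and crictl are available on the node (the structure is simplified for illustration and is not copied from logs.go or cri.go):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            // List containers in any state whose name matches the component,
	            // printing only their IDs - mirroring the "crictl ps -a --quiet --name=..."
	            // commands visible in the log above.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            ids := strings.Fields(string(out))
	            if err != nil || len(ids) == 0 {
	                fmt.Printf("no container found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	        }
	    }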
	I0816 18:16:57.824996   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:57.838666   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:57.838740   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:57.872828   75402 cri.go:89] found id: ""
	I0816 18:16:57.872861   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.872869   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:57.872875   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:57.872927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:57.907324   75402 cri.go:89] found id: ""
	I0816 18:16:57.907354   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.907366   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:57.907373   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:57.907433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:57.941657   75402 cri.go:89] found id: ""
	I0816 18:16:57.941682   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.941689   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:57.941695   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:57.941746   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:57.981424   75402 cri.go:89] found id: ""
	I0816 18:16:57.981466   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.981480   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:57.981489   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:57.981562   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:58.015534   75402 cri.go:89] found id: ""
	I0816 18:16:58.015587   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.015598   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:58.015606   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:58.015669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:58.047875   75402 cri.go:89] found id: ""
	I0816 18:16:58.047908   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.047917   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:58.047923   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:58.047976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:58.079294   75402 cri.go:89] found id: ""
	I0816 18:16:58.079324   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.079334   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:58.079342   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:58.079406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:54.940977   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.439254   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.208298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.706380   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.948080   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.949589   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:58.112357   75402 cri.go:89] found id: ""
	I0816 18:16:58.112389   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.112401   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:58.112413   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:58.112428   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:58.159903   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:58.159934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:58.172763   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:58.172789   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:58.245827   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:58.245856   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:58.245872   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:58.325008   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:58.325049   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:00.864354   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:00.877517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:00.877593   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:00.915396   75402 cri.go:89] found id: ""
	I0816 18:17:00.915428   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.915438   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:00.915446   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:00.915611   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:00.953950   75402 cri.go:89] found id: ""
	I0816 18:17:00.953977   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.953987   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:00.953993   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:00.954051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:00.987673   75402 cri.go:89] found id: ""
	I0816 18:17:00.987703   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.987713   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:00.987721   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:00.987784   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:01.021230   75402 cri.go:89] found id: ""
	I0816 18:17:01.021277   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.021308   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:01.021315   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:01.021388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:01.057087   75402 cri.go:89] found id: ""
	I0816 18:17:01.057117   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.057127   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:01.057135   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:01.057207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:01.094142   75402 cri.go:89] found id: ""
	I0816 18:17:01.094168   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.094176   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:01.094183   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:01.094233   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:01.132799   75402 cri.go:89] found id: ""
	I0816 18:17:01.132824   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.132831   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:01.132837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:01.132888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:01.173367   75402 cri.go:89] found id: ""
	I0816 18:17:01.173402   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.173414   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:01.173425   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:01.173443   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:01.186856   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:01.186896   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:01.259913   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:01.259941   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:01.259955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:01.340914   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:01.340947   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:01.381023   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:01.381058   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:59.440314   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.440377   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:59.706750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.707186   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:00.448182   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:02.448773   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:03.933420   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:03.946940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:03.947008   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:03.984529   75402 cri.go:89] found id: ""
	I0816 18:17:03.984560   75402 logs.go:276] 0 containers: []
	W0816 18:17:03.984571   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:03.984581   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:03.984668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:04.017900   75402 cri.go:89] found id: ""
	I0816 18:17:04.017929   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.017940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:04.017948   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:04.018009   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:04.050837   75402 cri.go:89] found id: ""
	I0816 18:17:04.050871   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.050888   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:04.050896   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:04.050959   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:04.085448   75402 cri.go:89] found id: ""
	I0816 18:17:04.085477   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.085487   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:04.085495   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:04.085564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:04.118177   75402 cri.go:89] found id: ""
	I0816 18:17:04.118203   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.118213   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:04.118220   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:04.118284   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:04.150289   75402 cri.go:89] found id: ""
	I0816 18:17:04.150317   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.150330   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:04.150338   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:04.150404   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:04.184258   75402 cri.go:89] found id: ""
	I0816 18:17:04.184282   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.184290   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:04.184295   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:04.184347   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:04.217142   75402 cri.go:89] found id: ""
	I0816 18:17:04.217174   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.217184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:04.217192   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:04.217204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:04.253000   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:04.253034   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:04.304978   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:04.305018   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:04.320210   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:04.320241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:04.396146   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:04.396169   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:04.396184   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:06.980747   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:06.992944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:06.993006   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:07.026303   75402 cri.go:89] found id: ""
	I0816 18:17:07.026356   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.026368   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:07.026376   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:07.026443   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:07.059226   75402 cri.go:89] found id: ""
	I0816 18:17:07.059257   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.059268   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:07.059277   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:07.059339   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:07.092142   75402 cri.go:89] found id: ""
	I0816 18:17:07.092171   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.092182   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:07.092188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:07.092248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:07.125284   75402 cri.go:89] found id: ""
	I0816 18:17:07.125330   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.125347   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:07.125355   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:07.125420   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:07.163890   75402 cri.go:89] found id: ""
	I0816 18:17:07.163919   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.163930   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:07.163938   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:07.164002   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:07.197988   75402 cri.go:89] found id: ""
	I0816 18:17:07.198014   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.198025   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:07.198033   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:07.198116   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:07.232709   75402 cri.go:89] found id: ""
	I0816 18:17:07.232738   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.232749   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:07.232756   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:07.232817   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:07.264514   75402 cri.go:89] found id: ""
	I0816 18:17:07.264548   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.264558   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:07.264569   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:07.264583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:07.316138   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:07.316173   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:07.329659   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:07.329688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:07.397345   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:07.397380   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:07.397397   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:07.481245   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:07.481280   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:03.940100   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:05.940355   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.940821   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.207253   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:06.705745   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:08.706828   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.949027   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.447957   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.024405   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:10.036860   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:10.036927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.069402   75402 cri.go:89] found id: ""
	I0816 18:17:10.069436   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.069448   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:10.069458   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:10.069511   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:10.101480   75402 cri.go:89] found id: ""
	I0816 18:17:10.101508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.101518   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:10.101529   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:10.101601   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:10.131673   75402 cri.go:89] found id: ""
	I0816 18:17:10.131708   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.131719   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:10.131726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:10.131821   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:10.166476   75402 cri.go:89] found id: ""
	I0816 18:17:10.166508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.166518   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:10.166525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:10.166590   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:10.199296   75402 cri.go:89] found id: ""
	I0816 18:17:10.199321   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.199332   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:10.199340   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:10.199406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:10.232640   75402 cri.go:89] found id: ""
	I0816 18:17:10.232672   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.232683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:10.232691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:10.232775   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:10.263958   75402 cri.go:89] found id: ""
	I0816 18:17:10.263988   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.263998   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:10.264003   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:10.264052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:10.295904   75402 cri.go:89] found id: ""
	I0816 18:17:10.295929   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.295937   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:10.295946   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:10.295957   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:10.344874   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:10.344909   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:10.358523   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:10.358552   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:10.433311   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:10.433334   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:10.433351   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:10.514580   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:10.514620   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.053815   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:13.068517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:13.068597   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.440472   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:12.939209   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.707438   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.207630   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:09.947889   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:11.949408   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:14.447906   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.104251   75402 cri.go:89] found id: ""
	I0816 18:17:13.104279   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.104313   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:13.104321   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:13.104375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:13.137415   75402 cri.go:89] found id: ""
	I0816 18:17:13.137442   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.137453   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:13.137461   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:13.137510   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:13.174165   75402 cri.go:89] found id: ""
	I0816 18:17:13.174191   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.174203   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:13.174210   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:13.174271   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:13.206789   75402 cri.go:89] found id: ""
	I0816 18:17:13.206814   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.206823   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:13.206831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:13.206892   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:13.238950   75402 cri.go:89] found id: ""
	I0816 18:17:13.238975   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.238984   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:13.238990   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:13.239037   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:13.271485   75402 cri.go:89] found id: ""
	I0816 18:17:13.271518   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.271535   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:13.271544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:13.271612   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:13.307576   75402 cri.go:89] found id: ""
	I0816 18:17:13.307610   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.307622   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:13.307632   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:13.307698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:13.339746   75402 cri.go:89] found id: ""
	I0816 18:17:13.339792   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.339802   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:13.339813   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:13.339827   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:13.352847   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:13.352875   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:13.440397   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:13.440418   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:13.440432   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:13.514879   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:13.514916   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.553848   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:13.553882   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.103318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:16.115837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:16.115922   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:16.147079   75402 cri.go:89] found id: ""
	I0816 18:17:16.147108   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.147119   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:16.147127   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:16.147189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:16.184207   75402 cri.go:89] found id: ""
	I0816 18:17:16.184233   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.184241   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:16.184247   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:16.184295   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:16.219036   75402 cri.go:89] found id: ""
	I0816 18:17:16.219065   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.219072   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:16.219078   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:16.219163   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:16.251269   75402 cri.go:89] found id: ""
	I0816 18:17:16.251307   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.251320   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:16.251329   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:16.251394   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:16.286549   75402 cri.go:89] found id: ""
	I0816 18:17:16.286576   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.286585   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:16.286591   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:16.286647   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:16.322017   75402 cri.go:89] found id: ""
	I0816 18:17:16.322045   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.322055   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:16.322063   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:16.322128   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:16.353606   75402 cri.go:89] found id: ""
	I0816 18:17:16.353636   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.353646   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:16.353653   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:16.353719   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:16.386973   75402 cri.go:89] found id: ""
	I0816 18:17:16.387005   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.387016   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:16.387027   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:16.387039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.437031   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:16.437066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:16.451258   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:16.451292   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:16.519130   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:16.519155   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:16.519170   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:16.598591   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:16.598626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:14.939993   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.440655   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:15.705969   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.706271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:16.449266   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:18.948220   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:19.147916   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:19.160525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:19.160600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:19.193494   75402 cri.go:89] found id: ""
	I0816 18:17:19.193520   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.193527   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:19.193533   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:19.193599   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:19.230936   75402 cri.go:89] found id: ""
	I0816 18:17:19.230963   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.230971   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:19.230976   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:19.231029   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:19.263713   75402 cri.go:89] found id: ""
	I0816 18:17:19.263735   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.263742   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:19.263748   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:19.263794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:19.294609   75402 cri.go:89] found id: ""
	I0816 18:17:19.294635   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.294642   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:19.294647   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:19.294698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:19.329278   75402 cri.go:89] found id: ""
	I0816 18:17:19.329303   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.329313   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:19.329319   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:19.329368   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:19.362007   75402 cri.go:89] found id: ""
	I0816 18:17:19.362043   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.362052   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:19.362067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:19.362120   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:19.395190   75402 cri.go:89] found id: ""
	I0816 18:17:19.395217   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.395248   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:19.395255   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:19.395302   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:19.426962   75402 cri.go:89] found id: ""
	I0816 18:17:19.426991   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.427002   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:19.427012   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:19.427027   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.441319   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:19.441346   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:19.511390   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:19.511409   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:19.511425   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:19.590897   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:19.590935   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:19.628753   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:19.628781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.182534   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:22.194844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:22.194917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:22.228225   75402 cri.go:89] found id: ""
	I0816 18:17:22.228247   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.228269   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:22.228276   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:22.228325   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:22.258614   75402 cri.go:89] found id: ""
	I0816 18:17:22.258646   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.258654   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:22.258660   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:22.258708   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:22.289103   75402 cri.go:89] found id: ""
	I0816 18:17:22.289136   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.289147   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:22.289154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:22.289215   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:22.321828   75402 cri.go:89] found id: ""
	I0816 18:17:22.321857   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.321869   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:22.321877   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:22.321942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:22.353557   75402 cri.go:89] found id: ""
	I0816 18:17:22.353588   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.353597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:22.353602   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:22.353660   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:22.385078   75402 cri.go:89] found id: ""
	I0816 18:17:22.385103   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.385110   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:22.385116   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:22.385189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:22.415864   75402 cri.go:89] found id: ""
	I0816 18:17:22.415900   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.415913   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:22.415922   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:22.415990   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:22.449895   75402 cri.go:89] found id: ""
	I0816 18:17:22.449922   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.449942   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:22.449957   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:22.449974   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:22.523055   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:22.523073   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:22.523084   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:22.599680   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:22.599719   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:22.638021   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:22.638057   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.688970   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:22.689010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.941154   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.440580   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:20.207713   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.706805   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:21.448399   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:23.448444   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.202748   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:25.217316   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:25.217388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:25.249528   75402 cri.go:89] found id: ""
	I0816 18:17:25.249558   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.249566   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:25.249578   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:25.249625   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:25.282667   75402 cri.go:89] found id: ""
	I0816 18:17:25.282696   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.282706   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:25.282712   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:25.282764   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:25.314061   75402 cri.go:89] found id: ""
	I0816 18:17:25.314091   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.314101   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:25.314108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:25.314161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:25.351260   75402 cri.go:89] found id: ""
	I0816 18:17:25.351287   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.351296   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:25.351301   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:25.351352   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:25.388303   75402 cri.go:89] found id: ""
	I0816 18:17:25.388334   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.388345   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:25.388352   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:25.388412   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:25.422133   75402 cri.go:89] found id: ""
	I0816 18:17:25.422161   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.422169   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:25.422175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:25.422232   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:25.456749   75402 cri.go:89] found id: ""
	I0816 18:17:25.456775   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.456783   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:25.456789   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:25.456836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:25.494783   75402 cri.go:89] found id: ""
	I0816 18:17:25.494809   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.494817   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:25.494825   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:25.494836   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:25.561253   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:25.561290   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.580349   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:25.580383   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:25.656333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:25.656361   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:25.656378   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:25.733479   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:25.733515   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:24.444069   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.939743   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:24.707849   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.709711   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.448555   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:27.449070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:28.272217   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:28.285750   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:28.285822   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:28.318230   75402 cri.go:89] found id: ""
	I0816 18:17:28.318260   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.318268   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:28.318275   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:28.318344   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:28.351766   75402 cri.go:89] found id: ""
	I0816 18:17:28.351798   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.351808   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:28.351814   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:28.351872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:28.385543   75402 cri.go:89] found id: ""
	I0816 18:17:28.385572   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.385581   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:28.385588   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:28.385653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:28.418808   75402 cri.go:89] found id: ""
	I0816 18:17:28.418837   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.418846   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:28.418852   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:28.418900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:28.453883   75402 cri.go:89] found id: ""
	I0816 18:17:28.453911   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.453922   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:28.453929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:28.453996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:28.486261   75402 cri.go:89] found id: ""
	I0816 18:17:28.486291   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.486304   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:28.486310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:28.486366   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:28.520617   75402 cri.go:89] found id: ""
	I0816 18:17:28.520658   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.520670   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:28.520678   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:28.520731   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:28.552996   75402 cri.go:89] found id: ""
	I0816 18:17:28.553026   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.553036   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:28.553046   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:28.553061   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:28.604149   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:28.604192   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:28.617393   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:28.617421   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:28.683258   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:28.683279   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:28.683294   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.766933   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:28.766977   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.305897   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:31.326070   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:31.326143   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:31.375314   75402 cri.go:89] found id: ""
	I0816 18:17:31.375350   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.375361   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:31.375369   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:31.375429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:31.407372   75402 cri.go:89] found id: ""
	I0816 18:17:31.407398   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.407406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:31.407411   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:31.407459   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:31.445679   75402 cri.go:89] found id: ""
	I0816 18:17:31.445706   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.445714   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:31.445720   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:31.445781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:31.480040   75402 cri.go:89] found id: ""
	I0816 18:17:31.480072   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.480080   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:31.480085   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:31.480145   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:31.511143   75402 cri.go:89] found id: ""
	I0816 18:17:31.511171   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.511182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:31.511188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:31.511252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:31.544254   75402 cri.go:89] found id: ""
	I0816 18:17:31.544282   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.544293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:31.544300   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:31.544363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:31.579007   75402 cri.go:89] found id: ""
	I0816 18:17:31.579033   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.579041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:31.579046   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:31.579108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:31.619966   75402 cri.go:89] found id: ""
	I0816 18:17:31.619995   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.620005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:31.620018   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:31.620035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.657784   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:31.657815   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:31.706824   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:31.706853   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:31.719696   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:31.719721   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:31.786096   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:31.786124   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:31.786142   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.940711   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.440514   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.206929   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.706188   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:33.706244   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.948053   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:32.448453   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.363862   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:34.377365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:34.377430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:34.414191   75402 cri.go:89] found id: ""
	I0816 18:17:34.414216   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.414223   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:34.414229   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:34.414285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:34.446811   75402 cri.go:89] found id: ""
	I0816 18:17:34.446836   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.446843   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:34.446848   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:34.446905   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:34.477582   75402 cri.go:89] found id: ""
	I0816 18:17:34.477615   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.477627   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:34.477634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:34.477695   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:34.507868   75402 cri.go:89] found id: ""
	I0816 18:17:34.507901   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.507912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:34.507921   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:34.507984   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:34.538719   75402 cri.go:89] found id: ""
	I0816 18:17:34.538754   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.538765   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:34.538772   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:34.538826   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:34.571445   75402 cri.go:89] found id: ""
	I0816 18:17:34.571468   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.571477   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:34.571484   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:34.571557   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:34.601587   75402 cri.go:89] found id: ""
	I0816 18:17:34.601611   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.601618   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:34.601624   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:34.601669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:34.634850   75402 cri.go:89] found id: ""
	I0816 18:17:34.634878   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.634892   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:34.634906   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:34.634920   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:34.682828   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:34.682859   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:34.695796   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:34.695820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:34.762100   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:34.762121   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:34.762133   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.845329   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:34.845359   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:37.386266   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:37.398940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:37.399005   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:37.433072   75402 cri.go:89] found id: ""
	I0816 18:17:37.433099   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.433112   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:37.433118   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:37.433169   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:37.466968   75402 cri.go:89] found id: ""
	I0816 18:17:37.467001   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.467012   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:37.467021   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:37.467086   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:37.509268   75402 cri.go:89] found id: ""
	I0816 18:17:37.509291   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.509300   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:37.509306   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:37.509365   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:37.541295   75402 cri.go:89] found id: ""
	I0816 18:17:37.541338   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.541350   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:37.541357   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:37.541421   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:37.575423   75402 cri.go:89] found id: ""
	I0816 18:17:37.575453   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.575464   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:37.575472   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:37.575540   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:37.614787   75402 cri.go:89] found id: ""
	I0816 18:17:37.614817   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.614828   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:37.614835   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:37.614896   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:37.646396   75402 cri.go:89] found id: ""
	I0816 18:17:37.646430   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.646441   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:37.646449   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:37.646517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:37.679383   75402 cri.go:89] found id: ""
	I0816 18:17:37.679414   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.679423   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:37.679431   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:37.679442   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:37.729641   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:37.729673   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:37.742420   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:37.742448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:37.812572   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:37.812600   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:37.812615   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:37.887100   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:37.887137   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:33.940380   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.941055   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.440700   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.706903   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.207115   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.947638   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:37.448511   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:39.448944   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.424202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:40.438231   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:40.438337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:40.474614   75402 cri.go:89] found id: ""
	I0816 18:17:40.474639   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.474648   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:40.474653   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:40.474701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:40.510123   75402 cri.go:89] found id: ""
	I0816 18:17:40.510154   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.510162   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:40.510167   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:40.510217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:40.548971   75402 cri.go:89] found id: ""
	I0816 18:17:40.549000   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.549008   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:40.549013   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:40.549069   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:40.595126   75402 cri.go:89] found id: ""
	I0816 18:17:40.595158   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.595167   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:40.595174   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:40.595220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:40.629769   75402 cri.go:89] found id: ""
	I0816 18:17:40.629793   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.629801   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:40.629807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:40.629871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:40.661889   75402 cri.go:89] found id: ""
	I0816 18:17:40.661922   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.661932   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:40.661939   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:40.662001   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:40.697764   75402 cri.go:89] found id: ""
	I0816 18:17:40.697790   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.697801   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:40.697808   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:40.697867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:40.734825   75402 cri.go:89] found id: ""
	I0816 18:17:40.734852   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.734862   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:40.734872   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:40.734939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:40.787975   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:40.788015   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:40.800817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:40.800843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:40.874182   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:40.874205   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:40.874219   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:40.960032   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:40.960066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:40.940284   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.943218   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.207943   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.707356   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:41.947437   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.947887   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.499770   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:43.513726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:43.513806   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:43.548368   75402 cri.go:89] found id: ""
	I0816 18:17:43.548396   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.548406   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:43.548413   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:43.548474   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:43.581177   75402 cri.go:89] found id: ""
	I0816 18:17:43.581205   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.581216   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:43.581223   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:43.581291   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:43.614315   75402 cri.go:89] found id: ""
	I0816 18:17:43.614354   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.614367   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:43.614374   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:43.614437   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:43.648608   75402 cri.go:89] found id: ""
	I0816 18:17:43.648645   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.648658   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:43.648669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:43.648722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:43.680549   75402 cri.go:89] found id: ""
	I0816 18:17:43.680586   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.680597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:43.680604   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:43.680686   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:43.710473   75402 cri.go:89] found id: ""
	I0816 18:17:43.710497   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.710506   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:43.710514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:43.710576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:43.741415   75402 cri.go:89] found id: ""
	I0816 18:17:43.741442   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.741450   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:43.741456   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:43.741505   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:43.775018   75402 cri.go:89] found id: ""
	I0816 18:17:43.775051   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.775063   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:43.775074   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:43.775087   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:43.825596   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:43.825630   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:43.839133   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:43.839161   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:43.905645   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:43.905667   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:43.905679   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:43.988860   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:43.988901   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.525896   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:46.539147   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:46.539229   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:46.570703   75402 cri.go:89] found id: ""
	I0816 18:17:46.570726   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.570734   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:46.570740   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:46.570785   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:46.605909   75402 cri.go:89] found id: ""
	I0816 18:17:46.605939   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.605954   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:46.605961   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:46.606013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:46.638865   75402 cri.go:89] found id: ""
	I0816 18:17:46.638899   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.638911   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:46.638919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:46.638994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:46.671869   75402 cri.go:89] found id: ""
	I0816 18:17:46.671904   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.671917   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:46.671926   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:46.671988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:46.703423   75402 cri.go:89] found id: ""
	I0816 18:17:46.703464   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.703473   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:46.703479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:46.703545   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:46.735824   75402 cri.go:89] found id: ""
	I0816 18:17:46.735853   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.735864   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:46.735871   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:46.735926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:46.767122   75402 cri.go:89] found id: ""
	I0816 18:17:46.767146   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.767154   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:46.767160   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:46.767207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:46.798093   75402 cri.go:89] found id: ""
	I0816 18:17:46.798126   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.798140   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:46.798152   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:46.798167   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.832699   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:46.832725   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:46.884212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:46.884246   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:46.896896   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:46.896921   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:46.968805   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:46.968824   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:46.968838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:45.440474   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.940127   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.206534   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.206973   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.948252   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:48.448086   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.552581   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:49.565134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:49.565212   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:49.597012   75402 cri.go:89] found id: ""
	I0816 18:17:49.597042   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.597057   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:49.597067   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:49.597133   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:49.628902   75402 cri.go:89] found id: ""
	I0816 18:17:49.628935   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.628948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:49.628957   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:49.629025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:49.662668   75402 cri.go:89] found id: ""
	I0816 18:17:49.662698   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.662709   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:49.662715   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:49.662778   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:49.696354   75402 cri.go:89] found id: ""
	I0816 18:17:49.696381   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.696389   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:49.696395   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:49.696487   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:49.730801   75402 cri.go:89] found id: ""
	I0816 18:17:49.730838   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.730849   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:49.730856   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:49.730921   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:49.764474   75402 cri.go:89] found id: ""
	I0816 18:17:49.764503   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.764514   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:49.764522   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:49.764585   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:49.798577   75402 cri.go:89] found id: ""
	I0816 18:17:49.798616   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.798627   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:49.798634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:49.798703   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:49.830987   75402 cri.go:89] found id: ""
	I0816 18:17:49.831016   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.831024   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:49.831032   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:49.831043   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:49.883397   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:49.883433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:49.897208   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:49.897239   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:49.968363   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:49.968386   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:49.968398   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:50.056552   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:50.056583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:52.596191   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:52.609592   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:52.609668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:52.645775   75402 cri.go:89] found id: ""
	I0816 18:17:52.645807   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.645817   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:52.645823   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:52.645869   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:52.677817   75402 cri.go:89] found id: ""
	I0816 18:17:52.677852   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.677862   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:52.677870   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:52.677935   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:52.710618   75402 cri.go:89] found id: ""
	I0816 18:17:52.710648   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.710658   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:52.710664   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:52.710716   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:52.745830   75402 cri.go:89] found id: ""
	I0816 18:17:52.745858   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.745867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:52.745872   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:52.745929   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:52.778511   75402 cri.go:89] found id: ""
	I0816 18:17:52.778538   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.778548   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:52.778567   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:52.778632   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:52.810759   75402 cri.go:89] found id: ""
	I0816 18:17:52.810788   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.810800   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:52.810807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:52.810872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:52.843786   75402 cri.go:89] found id: ""
	I0816 18:17:52.843814   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.843824   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:52.843831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:52.843886   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:52.876886   75402 cri.go:89] found id: ""
	I0816 18:17:52.876914   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.876924   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:52.876934   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:52.876950   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:52.932519   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:52.932559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:52.946645   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:52.946671   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:53.018156   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:53.018177   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:53.018190   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:53.095562   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:53.095600   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:49.940263   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:51.940433   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.707635   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.206027   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:50.449204   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.949591   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.633820   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:55.646170   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:55.646238   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:55.678147   75402 cri.go:89] found id: ""
	I0816 18:17:55.678181   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.678194   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:55.678202   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:55.678264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:55.710910   75402 cri.go:89] found id: ""
	I0816 18:17:55.710938   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.710948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:55.710956   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:55.711012   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:55.744822   75402 cri.go:89] found id: ""
	I0816 18:17:55.744853   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.744863   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:55.744870   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:55.744931   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:55.791677   75402 cri.go:89] found id: ""
	I0816 18:17:55.791708   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.791719   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:55.791727   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:55.791788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:55.826448   75402 cri.go:89] found id: ""
	I0816 18:17:55.826481   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.826492   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:55.826500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:55.826564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:55.861178   75402 cri.go:89] found id: ""
	I0816 18:17:55.861210   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.861219   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:55.861225   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:55.861280   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:55.898073   75402 cri.go:89] found id: ""
	I0816 18:17:55.898099   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.898110   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:55.898117   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:55.898184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:55.931446   75402 cri.go:89] found id: ""
	I0816 18:17:55.931478   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.931487   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:55.931498   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:55.931514   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:55.999910   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:55.999931   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:55.999943   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:56.077240   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:56.077312   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:56.115479   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:56.115506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:56.166954   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:56.166989   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:54.440166   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.939865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:54.206368   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.206710   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.207053   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.448566   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:57.948891   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.680571   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:58.692824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:58.692890   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:58.729761   75402 cri.go:89] found id: ""
	I0816 18:17:58.729786   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.729794   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:58.729799   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:58.729857   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:58.764943   75402 cri.go:89] found id: ""
	I0816 18:17:58.765082   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.765113   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:58.765124   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:58.765179   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:58.801314   75402 cri.go:89] found id: ""
	I0816 18:17:58.801345   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.801357   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:58.801365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:58.801429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:58.833936   75402 cri.go:89] found id: ""
	I0816 18:17:58.833973   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.833982   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:58.833988   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:58.834046   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:58.870108   75402 cri.go:89] found id: ""
	I0816 18:17:58.870137   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.870148   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:58.870155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:58.870219   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:58.904157   75402 cri.go:89] found id: ""
	I0816 18:17:58.904184   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.904194   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:58.904201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:58.904264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:58.937862   75402 cri.go:89] found id: ""
	I0816 18:17:58.937891   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.937901   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:58.937909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:58.937972   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:58.972465   75402 cri.go:89] found id: ""
	I0816 18:17:58.972495   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.972506   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:58.972517   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:58.972532   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:59.047197   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:59.047223   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:59.047238   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:59.126634   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:59.126668   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:59.165528   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:59.165562   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:59.214294   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:59.214433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:01.729662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:01.742582   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:01.742642   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:01.776148   75402 cri.go:89] found id: ""
	I0816 18:18:01.776180   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.776188   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:01.776197   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:01.776243   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:01.809186   75402 cri.go:89] found id: ""
	I0816 18:18:01.809218   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.809229   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:01.809237   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:01.809307   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:01.842379   75402 cri.go:89] found id: ""
	I0816 18:18:01.842406   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.842417   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:01.842425   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:01.842490   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:01.874706   75402 cri.go:89] found id: ""
	I0816 18:18:01.874739   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.874747   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:01.874753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:01.874813   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:01.915567   75402 cri.go:89] found id: ""
	I0816 18:18:01.915596   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.915607   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:01.915615   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:01.915675   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:01.951527   75402 cri.go:89] found id: ""
	I0816 18:18:01.951559   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.951569   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:01.951576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:01.951638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:01.983822   75402 cri.go:89] found id: ""
	I0816 18:18:01.983848   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.983856   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:01.983861   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:01.983909   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:02.018976   75402 cri.go:89] found id: ""
	I0816 18:18:02.019003   75402 logs.go:276] 0 containers: []
	W0816 18:18:02.019012   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:02.019019   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:02.019033   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:02.071096   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:02.071131   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:02.085163   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:02.085189   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:02.154771   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:02.154789   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:02.154800   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:02.242068   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:02.242105   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:58.941456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:01.440404   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.208085   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.705334   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.447843   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.448334   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.790311   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:04.803215   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:04.803298   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:04.835834   75402 cri.go:89] found id: ""
	I0816 18:18:04.835868   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.835879   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:04.835886   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:04.835951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:04.870000   75402 cri.go:89] found id: ""
	I0816 18:18:04.870032   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.870042   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:04.870049   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:04.870111   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:04.906624   75402 cri.go:89] found id: ""
	I0816 18:18:04.906653   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.906663   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:04.906670   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:04.906730   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:04.940115   75402 cri.go:89] found id: ""
	I0816 18:18:04.940139   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.940148   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:04.940155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:04.940213   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:04.974461   75402 cri.go:89] found id: ""
	I0816 18:18:04.974493   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.974503   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:04.974510   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:04.974571   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:05.006593   75402 cri.go:89] found id: ""
	I0816 18:18:05.006618   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.006628   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:05.006635   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:05.006691   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:05.040041   75402 cri.go:89] found id: ""
	I0816 18:18:05.040066   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.040082   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:05.040089   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:05.040144   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:05.072968   75402 cri.go:89] found id: ""
	I0816 18:18:05.072996   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.073005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:05.073014   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:05.073025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:05.124510   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:05.124543   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:05.145566   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:05.145592   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:05.221874   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:05.221898   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:05.221914   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:05.297283   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:05.297316   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:07.837564   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:07.850372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:07.850441   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:07.882879   75402 cri.go:89] found id: ""
	I0816 18:18:07.882906   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.882915   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:07.882920   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:07.882978   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:07.916983   75402 cri.go:89] found id: ""
	I0816 18:18:07.917011   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.917019   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:07.917024   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:07.917075   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:07.953864   75402 cri.go:89] found id: ""
	I0816 18:18:07.953886   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.953896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:07.953903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:07.953951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:07.994375   75402 cri.go:89] found id: ""
	I0816 18:18:07.994399   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.994408   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:07.994414   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:07.994472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:08.029137   75402 cri.go:89] found id: ""
	I0816 18:18:08.029170   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.029182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:08.029189   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:08.029253   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:08.062331   75402 cri.go:89] found id: ""
	I0816 18:18:08.062358   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.062367   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:08.062373   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:08.062430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:08.097021   75402 cri.go:89] found id: ""
	I0816 18:18:08.097044   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.097051   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:08.097056   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:08.097112   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:03.940724   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.441847   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.706298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.707011   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.948066   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.948125   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.948992   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.131147   75402 cri.go:89] found id: ""
	I0816 18:18:08.131174   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.131184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:08.131192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:08.131203   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.182334   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:08.182373   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:08.195459   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:08.195485   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:08.260333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:08.260351   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:08.260363   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:08.344466   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:08.344506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:10.881640   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:10.896400   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:10.896482   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:10.934034   75402 cri.go:89] found id: ""
	I0816 18:18:10.934068   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.934076   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:10.934081   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:10.934130   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:10.966697   75402 cri.go:89] found id: ""
	I0816 18:18:10.966724   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.966733   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:10.966741   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:10.966807   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:11.000540   75402 cri.go:89] found id: ""
	I0816 18:18:11.000568   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.000579   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:11.000587   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:11.000665   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:11.034322   75402 cri.go:89] found id: ""
	I0816 18:18:11.034346   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.034354   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:11.034360   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:11.034407   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:11.067081   75402 cri.go:89] found id: ""
	I0816 18:18:11.067108   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.067116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:11.067122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:11.067170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:11.099726   75402 cri.go:89] found id: ""
	I0816 18:18:11.099753   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.099763   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:11.099770   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:11.099834   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:11.133187   75402 cri.go:89] found id: ""
	I0816 18:18:11.133216   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.133226   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:11.133235   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:11.133315   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:11.167121   75402 cri.go:89] found id: ""
	I0816 18:18:11.167157   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.167166   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:11.167177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:11.167194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:11.181396   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:11.181424   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:11.248286   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:11.248313   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:11.248325   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:11.328546   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:11.328583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:11.365534   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:11.365576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.939686   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.941097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.440001   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:09.207018   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:11.207677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.706818   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.949461   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.448057   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.919889   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:13.935097   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:13.935178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:13.973196   75402 cri.go:89] found id: ""
	I0816 18:18:13.973225   75402 logs.go:276] 0 containers: []
	W0816 18:18:13.973236   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:13.973244   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:13.973328   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:14.011913   75402 cri.go:89] found id: ""
	I0816 18:18:14.011936   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.011944   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:14.011950   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:14.012013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:14.048418   75402 cri.go:89] found id: ""
	I0816 18:18:14.048447   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.048459   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:14.048466   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:14.048515   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:14.082462   75402 cri.go:89] found id: ""
	I0816 18:18:14.082496   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.082506   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:14.082514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:14.082576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:14.114958   75402 cri.go:89] found id: ""
	I0816 18:18:14.114986   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.114996   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:14.115005   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:14.115067   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:14.154829   75402 cri.go:89] found id: ""
	I0816 18:18:14.154865   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.154878   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:14.154888   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:14.154957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:14.190012   75402 cri.go:89] found id: ""
	I0816 18:18:14.190045   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.190053   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:14.190058   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:14.190108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:14.223314   75402 cri.go:89] found id: ""
	I0816 18:18:14.223341   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.223350   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:14.223360   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:14.223381   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:14.274995   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:14.275035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:14.288518   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:14.288564   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:14.365668   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:14.365691   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:14.365705   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:14.445828   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:14.445866   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:16.981802   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:16.994729   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:16.994794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:17.029790   75402 cri.go:89] found id: ""
	I0816 18:18:17.029821   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.029839   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:17.029848   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:17.029912   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:17.063194   75402 cri.go:89] found id: ""
	I0816 18:18:17.063223   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.063233   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:17.063240   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:17.063293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:17.097808   75402 cri.go:89] found id: ""
	I0816 18:18:17.097831   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.097839   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:17.097844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:17.097900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:17.132646   75402 cri.go:89] found id: ""
	I0816 18:18:17.132682   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.132691   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:17.132697   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:17.132751   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:17.164285   75402 cri.go:89] found id: ""
	I0816 18:18:17.164316   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.164328   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:17.164335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:17.164391   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:17.195642   75402 cri.go:89] found id: ""
	I0816 18:18:17.195672   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.195683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:17.195691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:17.195754   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:17.228005   75402 cri.go:89] found id: ""
	I0816 18:18:17.228033   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.228041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:17.228047   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:17.228107   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:17.279195   75402 cri.go:89] found id: ""
	I0816 18:18:17.279229   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.279241   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:17.279253   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:17.279270   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:17.360084   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:17.360125   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:17.405184   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:17.405210   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:17.457453   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:17.457483   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:17.471472   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:17.471502   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:17.536478   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:15.939660   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.940456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:16.207019   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:18.706191   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:15.450419   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.948912   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.036644   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:20.050169   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:20.050244   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.087943   75402 cri.go:89] found id: ""
	I0816 18:18:20.087971   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.087981   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:20.087988   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:20.088051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:20.119908   75402 cri.go:89] found id: ""
	I0816 18:18:20.119931   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.119940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:20.119945   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:20.120013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:20.152115   75402 cri.go:89] found id: ""
	I0816 18:18:20.152146   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.152156   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:20.152162   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:20.152209   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:20.189464   75402 cri.go:89] found id: ""
	I0816 18:18:20.189488   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.189495   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:20.189500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:20.189550   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:20.224779   75402 cri.go:89] found id: ""
	I0816 18:18:20.224807   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.224817   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:20.224824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:20.224888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:20.257021   75402 cri.go:89] found id: ""
	I0816 18:18:20.257048   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.257059   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:20.257067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:20.257121   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:20.290991   75402 cri.go:89] found id: ""
	I0816 18:18:20.291023   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.291032   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:20.291039   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:20.291099   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:20.323674   75402 cri.go:89] found id: ""
	I0816 18:18:20.323704   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.323715   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:20.323726   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:20.323742   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:20.373411   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:20.373447   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:20.386954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:20.386981   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:20.464366   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.464384   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:20.464403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:20.541836   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:20.541881   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:23.085071   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:23.100460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:23.100524   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.440656   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.942713   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.706771   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.207824   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.448676   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.141239   75402 cri.go:89] found id: ""
	I0816 18:18:23.141269   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.141280   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:23.141287   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:23.141354   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:23.172914   75402 cri.go:89] found id: ""
	I0816 18:18:23.172941   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.172950   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:23.172958   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:23.173015   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:23.205593   75402 cri.go:89] found id: ""
	I0816 18:18:23.205621   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.205632   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:23.205640   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:23.205706   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:23.239358   75402 cri.go:89] found id: ""
	I0816 18:18:23.239383   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.239392   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:23.239401   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:23.239463   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:23.271798   75402 cri.go:89] found id: ""
	I0816 18:18:23.271828   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.271838   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:23.271844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:23.271911   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:23.305287   75402 cri.go:89] found id: ""
	I0816 18:18:23.305316   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.305327   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:23.305335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:23.305397   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:23.344041   75402 cri.go:89] found id: ""
	I0816 18:18:23.344067   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.344075   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:23.344080   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:23.344134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:23.376540   75402 cri.go:89] found id: ""
	I0816 18:18:23.376571   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.376583   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:23.376601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:23.376616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:23.428265   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:23.428301   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:23.441377   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:23.441404   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:23.509219   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:23.509243   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:23.509259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:23.589151   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:23.589186   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.126176   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:26.140228   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:26.140292   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:26.176768   75402 cri.go:89] found id: ""
	I0816 18:18:26.176807   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.176820   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:26.176829   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:26.176887   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:26.212357   75402 cri.go:89] found id: ""
	I0816 18:18:26.212383   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.212390   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:26.212396   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:26.212457   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:26.245256   75402 cri.go:89] found id: ""
	I0816 18:18:26.245290   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.245302   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:26.245309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:26.245370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:26.277525   75402 cri.go:89] found id: ""
	I0816 18:18:26.277561   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.277569   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:26.277575   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:26.277627   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:26.310928   75402 cri.go:89] found id: ""
	I0816 18:18:26.310956   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.310967   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:26.310976   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:26.311052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:26.344595   75402 cri.go:89] found id: ""
	I0816 18:18:26.344647   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.344661   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:26.344669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:26.344741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:26.377776   75402 cri.go:89] found id: ""
	I0816 18:18:26.377805   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.377814   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:26.377820   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:26.377872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:26.411139   75402 cri.go:89] found id: ""
	I0816 18:18:26.411167   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.411179   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:26.411190   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:26.411204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:26.493802   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:26.493838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.529542   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:26.529576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:26.583544   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:26.583588   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:26.596429   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:26.596459   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:26.667858   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:25.441062   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.940609   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.706109   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:28.206196   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.448352   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.947950   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:29.168766   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:29.182032   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:29.182103   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:29.220213   75402 cri.go:89] found id: ""
	I0816 18:18:29.220239   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.220247   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:29.220253   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:29.220300   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:29.257820   75402 cri.go:89] found id: ""
	I0816 18:18:29.257850   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.257861   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:29.257867   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:29.257933   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:29.290450   75402 cri.go:89] found id: ""
	I0816 18:18:29.290473   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.290480   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:29.290485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:29.290546   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:29.328032   75402 cri.go:89] found id: ""
	I0816 18:18:29.328061   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.328070   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:29.328076   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:29.328135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:29.362104   75402 cri.go:89] found id: ""
	I0816 18:18:29.362132   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.362141   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:29.362149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:29.362218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:29.395258   75402 cri.go:89] found id: ""
	I0816 18:18:29.395290   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.395301   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:29.395309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:29.395375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:29.426617   75402 cri.go:89] found id: ""
	I0816 18:18:29.426646   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.426656   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:29.426663   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:29.426725   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:29.462861   75402 cri.go:89] found id: ""
	I0816 18:18:29.462890   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.462901   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:29.462912   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:29.462928   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:29.514882   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:29.514915   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:29.528101   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:29.528128   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:29.598983   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.599005   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:29.599020   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:29.684955   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:29.684991   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.230155   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:32.244158   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:32.244226   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:32.281993   75402 cri.go:89] found id: ""
	I0816 18:18:32.282020   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.282031   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:32.282037   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:32.282100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:32.316870   75402 cri.go:89] found id: ""
	I0816 18:18:32.316896   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.316906   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:32.316914   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:32.316976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:32.352597   75402 cri.go:89] found id: ""
	I0816 18:18:32.352637   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.352649   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:32.352656   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:32.352722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:32.387520   75402 cri.go:89] found id: ""
	I0816 18:18:32.387564   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.387576   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:32.387584   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:32.387638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:32.421499   75402 cri.go:89] found id: ""
	I0816 18:18:32.421526   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.421537   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:32.421544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:32.421603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:32.460048   75402 cri.go:89] found id: ""
	I0816 18:18:32.460075   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.460086   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:32.460093   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:32.460151   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:32.498148   75402 cri.go:89] found id: ""
	I0816 18:18:32.498176   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.498184   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:32.498190   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:32.498248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:32.530683   75402 cri.go:89] found id: ""
	I0816 18:18:32.530717   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.530730   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:32.530741   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:32.530762   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:32.614776   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:32.614820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.655628   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:32.655667   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:32.722763   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:32.722807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:32.739817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:32.739847   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:32.819297   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:30.440684   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.441210   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.206433   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.707436   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.448781   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.457660   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:35.320173   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:35.332427   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:35.332503   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:35.366316   75402 cri.go:89] found id: ""
	I0816 18:18:35.366346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.366357   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:35.366365   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:35.366433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:35.399308   75402 cri.go:89] found id: ""
	I0816 18:18:35.399346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.399357   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:35.399367   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:35.399434   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:35.434926   75402 cri.go:89] found id: ""
	I0816 18:18:35.434958   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.434971   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:35.434980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:35.435042   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:35.473222   75402 cri.go:89] found id: ""
	I0816 18:18:35.473247   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.473258   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:35.473266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:35.473343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:35.505484   75402 cri.go:89] found id: ""
	I0816 18:18:35.505521   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.505533   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:35.505540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:35.505608   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:35.540532   75402 cri.go:89] found id: ""
	I0816 18:18:35.540573   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.540584   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:35.540590   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:35.540663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:35.574205   75402 cri.go:89] found id: ""
	I0816 18:18:35.574235   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.574245   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:35.574252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:35.574343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:35.614707   75402 cri.go:89] found id: ""
	I0816 18:18:35.614732   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.614739   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:35.614747   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:35.614759   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:35.690830   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:35.690861   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:35.726601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:35.726627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:35.774706   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:35.774736   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:35.787557   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:35.787616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:35.857474   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:34.940337   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.440507   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:34.701151   74828 pod_ready.go:82] duration metric: took 4m0.000965442s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:34.701178   74828 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:34.701196   74828 pod_ready.go:39] duration metric: took 4m13.502588966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:34.701228   74828 kubeadm.go:597] duration metric: took 4m21.306103533s to restartPrimaryControlPlane
	W0816 18:18:34.701293   74828 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:34.701330   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:34.948583   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.447544   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:39.448942   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:38.358057   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:38.371128   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:38.371189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:38.404812   75402 cri.go:89] found id: ""
	I0816 18:18:38.404844   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.404855   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:38.404864   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:38.404926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:38.437922   75402 cri.go:89] found id: ""
	I0816 18:18:38.437950   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.437960   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:38.437967   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:38.438023   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:38.471474   75402 cri.go:89] found id: ""
	I0816 18:18:38.471509   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.471519   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:38.471525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:38.471582   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:38.510132   75402 cri.go:89] found id: ""
	I0816 18:18:38.510158   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.510168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:38.510184   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:38.510246   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:38.542212   75402 cri.go:89] found id: ""
	I0816 18:18:38.542251   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.542262   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:38.542269   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:38.542341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:38.579037   75402 cri.go:89] found id: ""
	I0816 18:18:38.579068   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.579076   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:38.579082   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:38.579129   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:38.619219   75402 cri.go:89] found id: ""
	I0816 18:18:38.619252   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.619263   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:38.619272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:38.619335   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:38.655124   75402 cri.go:89] found id: ""
	I0816 18:18:38.655149   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.655169   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:38.655180   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:38.655194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:38.737857   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:38.737894   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:38.779777   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:38.779806   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:38.831556   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:38.831590   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:38.844496   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:38.844523   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:38.914543   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.415612   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:41.428187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:41.428251   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:41.462932   75402 cri.go:89] found id: ""
	I0816 18:18:41.462964   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.462975   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:41.462983   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:41.463043   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:41.497712   75402 cri.go:89] found id: ""
	I0816 18:18:41.497739   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.497748   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:41.497754   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:41.497804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:41.528430   75402 cri.go:89] found id: ""
	I0816 18:18:41.528455   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.528463   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:41.528468   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:41.528527   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:41.560048   75402 cri.go:89] found id: ""
	I0816 18:18:41.560071   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.560081   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:41.560088   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:41.560142   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:41.592536   75402 cri.go:89] found id: ""
	I0816 18:18:41.592566   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.592577   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:41.592585   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:41.592663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:41.626850   75402 cri.go:89] found id: ""
	I0816 18:18:41.626884   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.626894   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:41.626902   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:41.626965   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:41.660452   75402 cri.go:89] found id: ""
	I0816 18:18:41.660478   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.660486   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:41.660491   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:41.660542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:41.695990   75402 cri.go:89] found id: ""
	I0816 18:18:41.696012   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.696020   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:41.696028   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:41.696039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:41.733107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:41.733134   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:41.782812   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:41.782843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:41.795954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:41.795984   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:41.867473   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.867526   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:41.867545   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:39.442037   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.940088   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.948682   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:43.942215   75006 pod_ready.go:82] duration metric: took 4m0.000164284s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:43.942239   75006 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:43.942255   75006 pod_ready.go:39] duration metric: took 4m12.163955241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:43.942279   75006 kubeadm.go:597] duration metric: took 4m21.898271101s to restartPrimaryControlPlane
	W0816 18:18:43.942326   75006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:43.942352   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:44.450340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:44.463299   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:44.463361   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:44.495068   75402 cri.go:89] found id: ""
	I0816 18:18:44.495098   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.495108   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:44.495116   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:44.495221   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:44.529615   75402 cri.go:89] found id: ""
	I0816 18:18:44.529638   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.529646   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:44.529651   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:44.529701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:44.565275   75402 cri.go:89] found id: ""
	I0816 18:18:44.565298   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.565306   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:44.565321   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:44.565384   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:44.598554   75402 cri.go:89] found id: ""
	I0816 18:18:44.598590   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.598601   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:44.598609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:44.598673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:44.631389   75402 cri.go:89] found id: ""
	I0816 18:18:44.631422   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.631436   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:44.631446   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:44.631519   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:44.663986   75402 cri.go:89] found id: ""
	I0816 18:18:44.664013   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.664023   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:44.664031   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:44.664095   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:44.700238   75402 cri.go:89] found id: ""
	I0816 18:18:44.700263   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.700272   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:44.700277   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:44.700330   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:44.732737   75402 cri.go:89] found id: ""
	I0816 18:18:44.732766   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.732779   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:44.732790   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:44.732807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.806427   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:44.806462   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:44.842965   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:44.842994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:44.895745   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:44.895781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:44.909850   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:44.909885   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:44.979315   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
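The "connection to the server localhost:8443 was refused" error means nothing is serving the Kubernetes API on this node, consistent with the empty container listings above. A quick manual confirmation of the same condition (illustrative; the pgrep pattern is the one minikube runs on the next log line):

    # Neither check should produce output on a node whose control plane never came up.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not found"
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on port 8443"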
	I0816 18:18:47.479563   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:47.491876   75402 kubeadm.go:597] duration metric: took 4m4.431091965s to restartPrimaryControlPlane
	W0816 18:18:47.491939   75402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:47.491962   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:43.941047   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:46.440592   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:48.441208   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:51.168302   75402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676317513s)
	I0816 18:18:51.168387   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:18:51.182492   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:18:51.192403   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:18:51.202058   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:18:51.202075   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:18:51.202115   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:18:51.210661   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:18:51.210721   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:18:51.219979   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:18:51.228422   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:18:51.228488   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:18:51.237159   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.245555   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:18:51.245622   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.253986   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:18:51.261885   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:18:51.261927   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
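The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so that the following kubeadm init can regenerate it. Condensed into a single loop, this is roughly (a sketch of the same logic, not the actual implementation):

    # Drop any kubeconfig that does not reference the expected API endpoint for this profile.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "${endpoint}" "/etc/kubernetes/${f}" || sudo rm -f "/etc/kubernetes/${f}"
    done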
	I0816 18:18:51.270479   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:18:51.335784   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:18:51.335883   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:18:51.482910   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:18:51.483069   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:18:51.483228   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:18:51.652730   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:18:51.655077   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:18:51.655185   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:18:51.655304   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:18:51.655425   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:18:51.655521   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:18:51.657408   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:18:51.657485   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:18:51.657561   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:18:51.657645   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:18:51.657748   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:18:51.657854   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:18:51.657911   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:18:51.657984   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:18:51.720786   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:18:51.991165   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:18:52.140983   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:18:52.453361   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:18:52.467210   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:18:52.469222   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:18:52.469338   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:18:52.590938   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:18:52.592875   75402 out.go:235]   - Booting up control plane ...
	I0816 18:18:52.592987   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:18:52.602597   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:18:52.603616   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:18:52.604417   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:18:52.606669   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:18:50.939639   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:52.940202   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:54.940917   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:57.439382   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:59.443139   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:01.940496   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
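The repeated pod_ready lines from process 74510 come from another concurrently running profile polling the metrics-server-6867b74b74-6hkzb pod in kube-system, which never reports Ready anywhere in this log. When triaging such a hang by hand, the usual next step is to inspect the pod's events and conditions (illustrative commands, run against the affected profile's kubeconfig/context):

    # Why is the pod not Ready? Image pull failures and failing probes both show up here.
    kubectl -n kube-system describe pod metrics-server-6867b74b74-6hkzb
    kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20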
	I0816 18:19:00.803654   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.102297191s)
	I0816 18:19:00.803740   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:00.818126   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:00.827602   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:00.836389   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:00.836410   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:00.836455   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:19:00.844830   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:00.844880   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:00.853736   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:19:00.862795   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:00.862855   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:00.872056   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.880410   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:00.880461   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.889000   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:19:00.897508   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:00.897568   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:00.906256   74828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:00.953336   74828 kubeadm.go:310] W0816 18:19:00.929461    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:00.955337   74828 kubeadm.go:310] W0816 18:19:00.931382    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:01.068247   74828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:03.940545   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:06.439727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:08.440027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:09.225829   74828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:09.225908   74828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:09.226014   74828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:09.226126   74828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:09.226242   74828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:09.226329   74828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:09.228065   74828 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:09.228133   74828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:09.228183   74828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:09.228252   74828 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:09.228315   74828 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:09.228403   74828 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:09.228489   74828 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:09.228584   74828 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:09.228686   74828 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:09.228787   74828 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:09.228864   74828 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:09.228903   74828 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:09.228983   74828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:09.229052   74828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:09.229147   74828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:09.229234   74828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:09.229332   74828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:09.229410   74828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:09.229532   74828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:09.229607   74828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.230874   74828 out.go:235]   - Booting up control plane ...
	I0816 18:19:09.230948   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:09.231032   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:09.231090   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:09.231202   74828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:09.231321   74828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:09.231381   74828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:09.231572   74828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:09.231662   74828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:09.231711   74828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.32263ms
	I0816 18:19:09.231774   74828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:09.231824   74828 kubeadm.go:310] [api-check] The API server is healthy after 5.002367118s
	I0816 18:19:09.231923   74828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:09.232091   74828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:09.232166   74828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:09.232419   74828 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-864476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:09.232497   74828 kubeadm.go:310] [bootstrap-token] Using token: 6m1jus.xr9uhx26t28q092p
	I0816 18:19:09.233962   74828 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:09.234068   74828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:09.234164   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:09.234315   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:09.234425   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:09.234522   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:09.234615   74828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:09.234775   74828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:09.234830   74828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:09.234892   74828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:09.234901   74828 kubeadm.go:310] 
	I0816 18:19:09.234971   74828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:09.234980   74828 kubeadm.go:310] 
	I0816 18:19:09.235067   74828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:09.235076   74828 kubeadm.go:310] 
	I0816 18:19:09.235115   74828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:09.235194   74828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:09.235271   74828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:09.235280   74828 kubeadm.go:310] 
	I0816 18:19:09.235367   74828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:09.235376   74828 kubeadm.go:310] 
	I0816 18:19:09.235448   74828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:09.235459   74828 kubeadm.go:310] 
	I0816 18:19:09.235533   74828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:09.235607   74828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:09.235677   74828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:09.235683   74828 kubeadm.go:310] 
	I0816 18:19:09.235795   74828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:09.235907   74828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:09.235916   74828 kubeadm.go:310] 
	I0816 18:19:09.235986   74828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236080   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:09.236099   74828 kubeadm.go:310] 	--control-plane 
	I0816 18:19:09.236105   74828 kubeadm.go:310] 
	I0816 18:19:09.236177   74828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:09.236185   74828 kubeadm.go:310] 
	I0816 18:19:09.236268   74828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236403   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:09.236416   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:19:09.236422   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:09.237971   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:10.069497   75006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.127122656s)
	I0816 18:19:10.069585   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:10.085322   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:10.098736   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:10.108163   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:10.108183   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:10.108224   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:19:10.117330   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:10.117382   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:10.127090   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:19:10.135574   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:10.135648   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:10.146127   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.154474   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:10.154533   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.163245   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:19:10.171315   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:10.171375   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:10.181088   75006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:10.225495   75006 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:10.225571   75006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:10.327332   75006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:10.327442   75006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:10.327586   75006 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:10.335739   75006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:10.337610   75006 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:10.337730   75006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:10.337818   75006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:10.337935   75006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:10.338054   75006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:10.338174   75006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:10.338254   75006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:10.338359   75006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:10.338452   75006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:10.338562   75006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:10.338668   75006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:10.338718   75006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:10.338796   75006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:10.437447   75006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:10.868191   75006 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:10.961497   75006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:11.363158   75006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:11.963929   75006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:11.964410   75006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:11.967675   75006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.239250   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:09.250270   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
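The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced at the "Configuring bridge CNI" step above. Its exact contents are not captured in this log; a bridge conflist of this kind typically looks roughly like the following (field values are assumptions, shown for orientation only):

    # Illustrative bridge CNI configuration; the real file written by minikube may differ.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF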
	I0816 18:19:09.267205   74828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:09.267346   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.267366   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-864476 minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=no-preload-864476 minikube.k8s.io/primary=true
	I0816 18:19:09.282111   74828 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:09.471160   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.971453   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.471576   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.971748   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.471954   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.971371   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.471626   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.972021   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.472254   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.588350   74828 kubeadm.go:1113] duration metric: took 4.321062687s to wait for elevateKubeSystemPrivileges
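The run of "kubectl get sa default" commands above, issued roughly every half second, is the elevateKubeSystemPrivileges step timed on the preceding line: minikube polls until the default ServiceAccount exists in the freshly initialized cluster. Written out directly, the wait amounts to (a sketch, not the real implementation):

    # Poll until kubeadm's post-init controllers have created the "default" ServiceAccount.
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done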
	I0816 18:19:13.588392   74828 kubeadm.go:394] duration metric: took 5m0.245036951s to StartCluster
	I0816 18:19:13.588413   74828 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.588500   74828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:13.591118   74828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.591418   74828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:13.591683   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:13.591744   74828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:13.591809   74828 addons.go:69] Setting storage-provisioner=true in profile "no-preload-864476"
	I0816 18:19:13.591839   74828 addons.go:234] Setting addon storage-provisioner=true in "no-preload-864476"
	W0816 18:19:13.591851   74828 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:19:13.591882   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592025   74828 addons.go:69] Setting default-storageclass=true in profile "no-preload-864476"
	I0816 18:19:13.592070   74828 addons.go:69] Setting metrics-server=true in profile "no-preload-864476"
	I0816 18:19:13.592135   74828 addons.go:234] Setting addon metrics-server=true in "no-preload-864476"
	W0816 18:19:13.592150   74828 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:13.592073   74828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864476"
	I0816 18:19:13.592272   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592206   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592326   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592654   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592677   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592731   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592753   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592790   74828 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:13.594236   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:13.613019   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0816 18:19:13.613061   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0816 18:19:13.613087   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0816 18:19:13.613498   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613552   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613708   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.614094   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614113   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614198   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614222   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614403   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614420   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614478   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614675   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614728   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614856   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.615039   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615068   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.615401   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615442   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.619787   74828 addons.go:234] Setting addon default-storageclass=true in "no-preload-864476"
	W0816 18:19:13.619815   74828 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:13.619848   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.620274   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.620438   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.642013   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0816 18:19:13.642196   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0816 18:19:13.642654   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643201   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.643227   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.643304   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643888   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644065   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.644086   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.644537   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644548   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.644591   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.645002   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.646881   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0816 18:19:13.647127   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.647406   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.648126   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.648156   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.648725   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.648935   74828 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:13.649121   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.649823   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:13.649840   74828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:13.649861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.651524   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.652917   74828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:10.441027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:12.939870   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:13.653916   74828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.653933   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:13.653952   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.654035   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654463   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.654482   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654665   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.654883   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.655044   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.655247   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.657315   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657699   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.657783   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657974   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.658125   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.658247   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.658362   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.670111   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0816 18:19:13.670711   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.671220   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.671239   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.671585   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.671778   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.673274   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.673480   74828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:13.673493   74828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:13.673511   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.677160   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677542   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.677564   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677854   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.678049   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.678170   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.678263   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:11.970291   75006 out.go:235]   - Booting up control plane ...
	I0816 18:19:11.970385   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:11.970516   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:11.970617   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:11.988374   75006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:11.997980   75006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:11.998045   75006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:12.132297   75006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:12.132447   75006 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:13.135489   75006 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003222114s
	I0816 18:19:13.135584   75006 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:13.840111   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:13.903130   74828 node_ready.go:35] waiting up to 6m0s for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915130   74828 node_ready.go:49] node "no-preload-864476" has status "Ready":"True"
	I0816 18:19:13.915163   74828 node_ready.go:38] duration metric: took 12.001127ms for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915174   74828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:13.926756   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:13.944598   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.971002   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:13.971036   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:13.998897   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:14.015731   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:14.015754   74828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:14.080186   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:14.080212   74828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:14.187279   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:15.075984   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077053329s)
	I0816 18:19:15.076058   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076071   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076364   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131733705s)
	I0816 18:19:15.076478   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076495   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076405   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076567   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076591   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076600   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076436   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.076786   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076838   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076859   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076879   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076969   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076987   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.077443   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.077517   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.077535   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.164872   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.164903   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.165218   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.165238   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373294   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1859614s)
	I0816 18:19:15.373399   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373417   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.373716   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.373769   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.373804   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373825   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373837   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.374124   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.374130   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.374181   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.374192   74828 addons.go:475] Verifying addon metrics-server=true in "no-preload-864476"
	I0816 18:19:15.375801   74828 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
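With the storage-provisioner, default-storageclass, and metrics-server manifests applied above, the open question for this profile is whether the metrics-server Deployment ever becomes available. Two quick checks that could be run on the node at this point (illustrative; they reuse the kubectl binary and kubeconfig paths already shown in the log):

    # Both the APIService and the Deployment must become Available/Ready for metrics-server to work.
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get apiservice v1beta1.metrics.k8s.io
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get deploy metrics-server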
	I0816 18:19:17.638005   75006 kubeadm.go:310] [api-check] The API server is healthy after 4.502130995s
	I0816 18:19:17.658334   75006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:17.678882   75006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:17.709612   75006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:17.709881   75006 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-256678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:17.724755   75006 kubeadm.go:310] [bootstrap-token] Using token: cdypho.k0vxtmnp4c93945s
	I0816 18:19:14.941895   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.440923   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:15.377611   74828 addons.go:510] duration metric: took 1.785861834s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:15.934515   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:18.435321   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.726222   75006 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:17.726361   75006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:17.733325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:17.740707   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:17.747325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:17.751554   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:17.761084   75006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:18.044607   75006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:18.485134   75006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:19.044481   75006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:19.045968   75006 kubeadm.go:310] 
	I0816 18:19:19.046038   75006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:19.046069   75006 kubeadm.go:310] 
	I0816 18:19:19.046185   75006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:19.046198   75006 kubeadm.go:310] 
	I0816 18:19:19.046229   75006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:19.046298   75006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:19.046343   75006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:19.046349   75006 kubeadm.go:310] 
	I0816 18:19:19.046396   75006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:19.046413   75006 kubeadm.go:310] 
	I0816 18:19:19.046504   75006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:19.046529   75006 kubeadm.go:310] 
	I0816 18:19:19.046614   75006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:19.046718   75006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:19.046813   75006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:19.046828   75006 kubeadm.go:310] 
	I0816 18:19:19.046941   75006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:19.047047   75006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:19.047056   75006 kubeadm.go:310] 
	I0816 18:19:19.047153   75006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047304   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:19.047346   75006 kubeadm.go:310] 	--control-plane 
	I0816 18:19:19.047358   75006 kubeadm.go:310] 
	I0816 18:19:19.047470   75006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:19.047480   75006 kubeadm.go:310] 
	I0816 18:19:19.047596   75006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047740   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:19.048871   75006 kubeadm.go:310] W0816 18:19:10.202021    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049167   75006 kubeadm.go:310] W0816 18:19:10.202700    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049279   75006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
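	The join commands printed above embed a bootstrap token and a CA-cert hash; if this report is later used to reproduce or extend the cluster, both values can be regenerated on the control-plane node. A minimal sketch using standard kubeadm/openssl invocations (not run by the test itself):

	    sudo kubeadm token list
	    sudo kubeadm token create --print-join-command   # mints a fresh token and prints a ready-made join command
	    # re-derive the --discovery-token-ca-cert-hash value from the cluster CA:
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'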
	I0816 18:19:19.049304   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:19:19.049318   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:19.051543   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:19.052677   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:19.063536   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
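	The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file contents themselves are not shown. For orientation only, a bridge conflist of that general shape typically looks like the sketch below; every value here (subnet included) is illustrative and is not the actual file minikube writes:

	    sudo mkdir -p /etc/cni/net.d
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF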
	I0816 18:19:19.084460   75006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:19.084540   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.084608   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-256678 minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=default-k8s-diff-port-256678 minikube.k8s.io/primary=true
	I0816 18:19:19.257760   75006 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:19.258124   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.759000   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.940737   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:22.440273   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.934243   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:23.433046   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.258798   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:20.759112   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.258598   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.758433   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.258181   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.758276   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.258184   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.758168   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.846653   75006 kubeadm.go:1113] duration metric: took 4.762173901s to wait for elevateKubeSystemPrivileges
	I0816 18:19:23.846688   75006 kubeadm.go:394] duration metric: took 5m1.846731834s to StartCluster
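	The repeated "kubectl get sa default" runs above are a poll that simply retries every half second until the cluster's default ServiceAccount exists. Run on the node, an equivalent wait would look like this (the kubectl invocation is copied verbatim from the log; the loop wrapper is only a sketch of what the harness is doing):

	    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done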
	I0816 18:19:23.846708   75006 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.846784   75006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:23.848375   75006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.848662   75006 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:23.848750   75006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:23.848814   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:23.848840   75006 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848858   75006 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848866   75006 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848878   75006 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-256678"
	I0816 18:19:23.848882   75006 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.848887   75006 addons.go:243] addon storage-provisioner should already be in state true
	W0816 18:19:23.848890   75006 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:23.848915   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848918   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848914   75006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-256678"
	I0816 18:19:23.849232   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849259   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849271   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849293   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849362   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849404   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.850478   75006 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:23.852034   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:23.865786   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0816 18:19:23.865939   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0816 18:19:23.866248   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866304   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866398   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0816 18:19:23.866816   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866845   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866860   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.866863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866935   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867328   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867333   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867430   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.867447   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.867742   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867871   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.867897   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.868227   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.868247   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.870993   75006 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.871020   75006 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:23.871051   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.871403   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.871433   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.885139   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0816 18:19:23.885814   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.886386   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.886403   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.886814   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.886856   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0816 18:19:23.887024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.887202   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.887542   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0816 18:19:23.887784   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.887797   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.887863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.888165   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.888372   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.888389   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.889026   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.889254   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.889268   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.889518   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.889758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.890483   75006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:23.891262   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.891838   75006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:23.891859   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:23.891877   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.892581   75006 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:23.893621   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:23.893684   75006 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:23.893882   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.894413   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.894973   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.894994   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.895161   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.895322   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.895578   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.895757   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.897167   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897666   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.897685   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897802   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.897972   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.898132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.898248   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.906377   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
	I0816 18:19:23.906708   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.907497   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.907513   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.907932   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.908240   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.909917   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.910141   75006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:23.910159   75006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:23.910177   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.912435   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912678   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.912710   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912858   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.912982   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.913066   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.913138   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
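	For reference, the node access the test wires up above can be reproduced by hand; the profile name, IP address, key path and username are all taken from the log, and the minikube subcommand is the simpler route:

	    minikube -p default-k8s-diff-port-256678 ssh
	    # or, using the same key the test uses:
	    ssh -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa \
	        docker@192.168.72.144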
	I0816 18:19:24.062487   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:24.083148   75006 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092886   75006 node_ready.go:49] node "default-k8s-diff-port-256678" has status "Ready":"True"
	I0816 18:19:24.092907   75006 node_ready.go:38] duration metric: took 9.72996ms for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092916   75006 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.099123   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
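	The readiness waits in this phase can be approximated from outside the harness with plain kubectl. The context name below assumes the usual minikube convention of naming the kubeconfig context after the profile, and the timeout mirrors the 6m0s budget above:

	    kubectl --context default-k8s-diff-port-256678 get nodes
	    kubectl --context default-k8s-diff-port-256678 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m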
	I0816 18:19:24.184211   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:24.197461   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:24.197491   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:24.219263   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:24.258463   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:24.258498   75006 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:24.355822   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.355902   75006 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:24.436401   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.866038   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866125   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866058   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866163   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866517   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866526   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866536   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866546   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866600   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866626   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866636   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866649   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866676   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866778   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866793   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866810   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866888   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866923   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866932   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886041   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.886065   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.886338   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.886359   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886384   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.225367   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225397   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225704   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.225720   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.225730   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225739   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225961   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.226005   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.226025   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.226043   75006 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-256678"
	I0816 18:19:25.227605   75006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
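	Beyond the "Verifying addon metrics-server=true" step above, a quick way to confirm the addon is actually registered and serving would be the following (illustrative commands, not run by the test; the last one only works once the APIService reports Available):

	    kubectl --context default-k8s-diff-port-256678 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context default-k8s-diff-port-256678 -n kube-system get deploy metrics-server
	    kubectl --context default-k8s-diff-port-256678 top nodes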
	I0816 18:19:23.934167   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.934191   74828 pod_ready.go:82] duration metric: took 10.007408518s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.934200   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940226   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.940249   74828 pod_ready.go:82] duration metric: took 6.040513ms for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940260   74828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945330   74828 pod_ready.go:93] pod "etcd-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.945351   74828 pod_ready.go:82] duration metric: took 5.082362ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945361   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949772   74828 pod_ready.go:93] pod "kube-apiserver-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.949800   74828 pod_ready.go:82] duration metric: took 4.429575ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949810   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954308   74828 pod_ready.go:93] pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.954328   74828 pod_ready.go:82] duration metric: took 4.510361ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954338   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331265   74828 pod_ready.go:93] pod "kube-proxy-6g6zx" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.331306   74828 pod_ready.go:82] duration metric: took 376.9609ms for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331320   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730715   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.730740   74828 pod_ready.go:82] duration metric: took 399.412376ms for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730748   74828 pod_ready.go:39] duration metric: took 10.815561534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.730761   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:24.730820   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:24.746674   74828 api_server.go:72] duration metric: took 11.155216371s to wait for apiserver process to appear ...
	I0816 18:19:24.746697   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:24.746714   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:19:24.750801   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:19:24.751835   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:24.751864   74828 api_server.go:131] duration metric: took 5.159229ms to wait for apiserver health ...
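	The healthz probe above uses the harness's admin client; the same check can be made by hand. The kubectl form assumes the profile's kubeconfig context, and the raw curl normally works unauthenticated because /healthz is readable by anonymous requests under the default system:public-info-viewer binding:

	    kubectl --context no-preload-864476 get --raw='/healthz'
	    kubectl --context no-preload-864476 get --raw='/readyz?verbose'
	    curl -sk https://192.168.50.50:8443/healthz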
	I0816 18:19:24.751872   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:24.935471   74828 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:24.935510   74828 system_pods.go:61] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:24.935520   74828 system_pods.go:61] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:24.935539   74828 system_pods.go:61] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:24.935548   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:24.935555   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:24.935562   74828 system_pods.go:61] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:24.935572   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:24.935584   74828 system_pods.go:61] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:24.935596   74828 system_pods.go:61] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:24.935607   74828 system_pods.go:74] duration metric: took 183.727841ms to wait for pod list to return data ...
	I0816 18:19:24.935621   74828 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:25.132713   74828 default_sa.go:45] found service account: "default"
	I0816 18:19:25.132740   74828 default_sa.go:55] duration metric: took 197.112152ms for default service account to be created ...
	I0816 18:19:25.132750   74828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:25.335012   74828 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:25.335043   74828 system_pods.go:89] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:25.335048   74828 system_pods.go:89] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:25.335052   74828 system_pods.go:89] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:25.335057   74828 system_pods.go:89] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:25.335061   74828 system_pods.go:89] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:25.335064   74828 system_pods.go:89] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:25.335068   74828 system_pods.go:89] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:25.335075   74828 system_pods.go:89] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:25.335081   74828 system_pods.go:89] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:25.335089   74828 system_pods.go:126] duration metric: took 202.33381ms to wait for k8s-apps to be running ...
	I0816 18:19:25.335098   74828 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:25.335141   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:25.349420   74828 system_svc.go:56] duration metric: took 14.310938ms WaitForService to wait for kubelet
	I0816 18:19:25.349457   74828 kubeadm.go:582] duration metric: took 11.758002576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:25.349480   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:25.532145   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:25.532175   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:25.532189   74828 node_conditions.go:105] duration metric: took 182.702662ms to run NodePressure ...
	I0816 18:19:25.532200   74828 start.go:241] waiting for startup goroutines ...
	I0816 18:19:25.532209   74828 start.go:246] waiting for cluster config update ...
	I0816 18:19:25.532222   74828 start.go:255] writing updated cluster config ...
	I0816 18:19:25.532529   74828 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:25.588070   74828 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:25.589615   74828 out.go:177] * Done! kubectl is now configured to use "no-preload-864476" cluster and "default" namespace by default
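	Once a profile reaches this point it is directly usable; typical follow-up commands (names taken from the log):

	    kubectl config use-context no-preload-864476
	    kubectl get pods -A
	    minikube profile list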
	I0816 18:19:24.440489   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:25.441683   74510 pod_ready.go:82] duration metric: took 4m0.007816418s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:19:25.441706   74510 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 18:19:25.441714   74510 pod_ready.go:39] duration metric: took 4m6.551547163s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
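	The 4m timeout above means the metrics-server pod never reported Ready. Elsewhere in this run the addon is configured with the image fake.domain/registry.k8s.io/echoserver:1.4, so an image-pull failure is a plausible cause here too. A minimal sketch of how to confirm that, assuming kubectl is pointed at this profile (pod name from the log):

	    kubectl -n kube-system describe pod metrics-server-6867b74b74-6hkzb | tail -n 20
	    kubectl -n kube-system get events --field-selector involvedObject.name=metrics-server-6867b74b74-6hkzb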
	I0816 18:19:25.441726   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:25.441753   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:25.441805   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:25.492207   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.492235   74510 cri.go:89] found id: ""
	I0816 18:19:25.492245   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:25.492313   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.497307   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:25.497388   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:25.537185   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.537211   74510 cri.go:89] found id: ""
	I0816 18:19:25.537220   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:25.537422   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.546564   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:25.546644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:25.602794   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.602817   74510 cri.go:89] found id: ""
	I0816 18:19:25.602827   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:25.602879   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.609018   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:25.609097   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:25.657942   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:25.657970   74510 cri.go:89] found id: ""
	I0816 18:19:25.657980   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:25.658044   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.663485   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:25.663551   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:25.709526   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:25.709554   74510 cri.go:89] found id: ""
	I0816 18:19:25.709564   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:25.709612   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.715845   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:25.715898   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:25.766505   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:25.766522   74510 cri.go:89] found id: ""
	I0816 18:19:25.766529   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:25.766573   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.771051   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:25.771127   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:25.810669   74510 cri.go:89] found id: ""
	I0816 18:19:25.810699   74510 logs.go:276] 0 containers: []
	W0816 18:19:25.810711   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:25.810720   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:25.810779   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:25.851412   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.851432   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:25.851438   74510 cri.go:89] found id: ""
	I0816 18:19:25.851454   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:25.851507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.856154   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.860812   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:25.860837   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.910929   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:25.910957   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.951932   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:25.951959   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.999861   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:25.999894   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:26.036535   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:26.036559   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:26.089637   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:26.089675   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:26.157679   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:26.157714   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:26.171402   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:26.171432   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:26.209537   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:26.209564   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:26.252702   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:26.252732   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:26.303169   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:26.303203   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:26.784058   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:26.784090   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:26.904095   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:26.904137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
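	The gathering pass above runs one crictl lookup plus one logs call per component, then pulls the kubelet and CRI-O journals. Condensed, the same collection on the node is (flags exactly as used in the log, with the component name turned into a loop variable):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	      for id in $(sudo crictl ps -a --quiet --name="$c"); do
	        echo "== $c ($id) =="
	        sudo /usr/bin/crictl logs --tail 400 "$id"
	      done
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400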
	I0816 18:19:25.228674   75006 addons.go:510] duration metric: took 1.37992722s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:26.105147   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:28.107202   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.607933   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:19:32.608136   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:32.608430   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
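	The connection-refused error above comes from kubeadm's kubelet health probe on port 10248. The usual first checks on the node are standard systemd/curl commands (not run by the test):

	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet -n 100 --no-pager
	    curl -sS http://localhost:10248/healthz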
	I0816 18:19:29.459100   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:29.476158   74510 api_server.go:72] duration metric: took 4m17.827179017s to wait for apiserver process to appear ...
	I0816 18:19:29.476183   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:29.476222   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:29.476279   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:29.509739   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:29.509767   74510 cri.go:89] found id: ""
	I0816 18:19:29.509776   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:29.509836   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.516078   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:29.516150   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:29.553766   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:29.553795   74510 cri.go:89] found id: ""
	I0816 18:19:29.553805   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:29.553857   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.558145   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:29.558210   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:29.599559   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:29.599583   74510 cri.go:89] found id: ""
	I0816 18:19:29.599594   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:29.599651   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.604108   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:29.604187   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:29.641990   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:29.642009   74510 cri.go:89] found id: ""
	I0816 18:19:29.642016   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:29.642062   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.645990   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:29.646047   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:29.679480   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:29.679505   74510 cri.go:89] found id: ""
	I0816 18:19:29.679514   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:29.679571   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.683361   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:29.683425   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:29.733167   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:29.733197   74510 cri.go:89] found id: ""
	I0816 18:19:29.733208   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:29.733266   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.737449   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:29.737518   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:29.771597   74510 cri.go:89] found id: ""
	I0816 18:19:29.771628   74510 logs.go:276] 0 containers: []
	W0816 18:19:29.771639   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:29.771647   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:29.771714   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:29.812346   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:29.812375   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:29.812381   74510 cri.go:89] found id: ""
	I0816 18:19:29.812390   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:29.812447   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.817909   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.821575   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:29.821602   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:30.288789   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:30.288836   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:30.332874   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:30.332904   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:30.347128   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:30.347168   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:30.456809   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:30.456845   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:30.505332   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:30.505362   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:30.540765   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:30.540798   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.576047   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:30.576077   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:30.611956   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:30.611992   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:30.678135   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:30.678177   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:30.732409   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:30.732437   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:30.773306   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:30.773331   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:30.827732   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:30.827763   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.367134   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:19:33.371523   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:19:33.372537   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:33.372560   74510 api_server.go:131] duration metric: took 3.896368169s to wait for apiserver health ...
	I0816 18:19:33.372568   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:33.372589   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:33.372653   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:33.409551   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:33.409579   74510 cri.go:89] found id: ""
	I0816 18:19:33.409590   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:33.409648   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.413727   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:33.413802   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:33.457246   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:33.457268   74510 cri.go:89] found id: ""
	I0816 18:19:33.457277   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:33.457337   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.461490   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:33.461556   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:33.497141   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:33.497169   74510 cri.go:89] found id: ""
	I0816 18:19:33.497180   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:33.497241   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.501353   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:33.501421   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:33.537797   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:33.537816   74510 cri.go:89] found id: ""
	I0816 18:19:33.537823   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:33.537877   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.541727   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:33.541784   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:33.575882   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:33.575905   74510 cri.go:89] found id: ""
	I0816 18:19:33.575913   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:33.575964   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.579592   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:33.579644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:33.614425   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.614447   74510 cri.go:89] found id: ""
	I0816 18:19:33.614455   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:33.614507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.618130   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:33.618178   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:33.652369   74510 cri.go:89] found id: ""
	I0816 18:19:33.652393   74510 logs.go:276] 0 containers: []
	W0816 18:19:33.652403   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:33.652410   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:33.652463   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:33.687276   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.687295   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:33.687301   74510 cri.go:89] found id: ""
	I0816 18:19:33.687309   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:33.687361   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.691100   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.695148   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:33.695179   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.110901   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.606195   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:34.110732   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.110764   75006 pod_ready.go:82] duration metric: took 10.011612904s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.110778   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116373   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.116392   75006 pod_ready.go:82] duration metric: took 5.607377ms for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116401   75006 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124005   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.124027   75006 pod_ready.go:82] duration metric: took 7.618878ms for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124039   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129603   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.129623   75006 pod_ready.go:82] duration metric: took 5.575452ms for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129633   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145449   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.145474   75006 pod_ready.go:82] duration metric: took 15.831669ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506455   75006 pod_ready.go:93] pod "kube-proxy-qsskg" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.506477   75006 pod_ready.go:82] duration metric: took 360.982998ms for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905345   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.905365   75006 pod_ready.go:82] duration metric: took 398.872303ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905373   75006 pod_ready.go:39] duration metric: took 10.812448791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:34.905386   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:34.905430   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:34.920554   75006 api_server.go:72] duration metric: took 11.071846456s to wait for apiserver process to appear ...
	I0816 18:19:34.920574   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:34.920589   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:19:34.927194   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:19:34.928420   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:34.928437   75006 api_server.go:131] duration metric: took 7.857168ms to wait for apiserver health ...
	I0816 18:19:34.928443   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:35.107220   75006 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:35.107248   75006 system_pods.go:61] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.107254   75006 system_pods.go:61] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.107258   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.107262   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.107267   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.107270   75006 system_pods.go:61] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.107274   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.107280   75006 system_pods.go:61] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.107288   75006 system_pods.go:61] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.107296   75006 system_pods.go:74] duration metric: took 178.847431ms to wait for pod list to return data ...
	I0816 18:19:35.107302   75006 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:35.303619   75006 default_sa.go:45] found service account: "default"
	I0816 18:19:35.303646   75006 default_sa.go:55] duration metric: took 196.337687ms for default service account to be created ...
	I0816 18:19:35.303655   75006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:35.508401   75006 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:35.508442   75006 system_pods.go:89] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.508452   75006 system_pods.go:89] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.508460   75006 system_pods.go:89] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.508466   75006 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.508471   75006 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.508477   75006 system_pods.go:89] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.508483   75006 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.508494   75006 system_pods.go:89] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.508504   75006 system_pods.go:89] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.508521   75006 system_pods.go:126] duration metric: took 204.859728ms to wait for k8s-apps to be running ...
	I0816 18:19:35.508544   75006 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:35.508605   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:35.523660   75006 system_svc.go:56] duration metric: took 15.109288ms WaitForService to wait for kubelet
	I0816 18:19:35.523687   75006 kubeadm.go:582] duration metric: took 11.674985717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:35.523704   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:35.704770   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:35.704797   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:35.704808   75006 node_conditions.go:105] duration metric: took 181.099433ms to run NodePressure ...
	I0816 18:19:35.704818   75006 start.go:241] waiting for startup goroutines ...
	I0816 18:19:35.704824   75006 start.go:246] waiting for cluster config update ...
	I0816 18:19:35.704834   75006 start.go:255] writing updated cluster config ...
	I0816 18:19:35.705096   75006 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:35.753637   75006 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:35.755747   75006 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-256678" cluster and "default" namespace by default
	I0816 18:19:33.732856   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:33.732881   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.796167   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:33.796215   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.835842   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:33.835869   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:33.956412   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:33.956450   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:34.004102   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:34.004137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:34.050504   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:34.050548   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:34.087815   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:34.087850   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:34.124096   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:34.124127   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:34.193377   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:34.193410   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:34.206480   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:34.206505   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:34.240262   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:34.240305   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:34.591979   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:34.592014   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:37.142552   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:19:37.142580   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.142585   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.142590   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.142593   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.142596   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.142600   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.142605   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.142609   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.142616   74510 system_pods.go:74] duration metric: took 3.770043434s to wait for pod list to return data ...
	I0816 18:19:37.142625   74510 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:37.145135   74510 default_sa.go:45] found service account: "default"
	I0816 18:19:37.145161   74510 default_sa.go:55] duration metric: took 2.530779ms for default service account to be created ...
	I0816 18:19:37.145169   74510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:37.149397   74510 system_pods.go:86] 8 kube-system pods found
	I0816 18:19:37.149423   74510 system_pods.go:89] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.149431   74510 system_pods.go:89] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.149437   74510 system_pods.go:89] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.149443   74510 system_pods.go:89] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.149451   74510 system_pods.go:89] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.149458   74510 system_pods.go:89] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.149471   74510 system_pods.go:89] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.149480   74510 system_pods.go:89] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.149491   74510 system_pods.go:126] duration metric: took 4.31556ms to wait for k8s-apps to be running ...
	I0816 18:19:37.149502   74510 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:37.149564   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:37.166663   74510 system_svc.go:56] duration metric: took 17.15398ms WaitForService to wait for kubelet
	I0816 18:19:37.166692   74510 kubeadm.go:582] duration metric: took 4m25.517719342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:37.166711   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:37.170081   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:37.170102   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:37.170112   74510 node_conditions.go:105] duration metric: took 3.396116ms to run NodePressure ...
	I0816 18:19:37.170122   74510 start.go:241] waiting for startup goroutines ...
	I0816 18:19:37.170129   74510 start.go:246] waiting for cluster config update ...
	I0816 18:19:37.170138   74510 start.go:255] writing updated cluster config ...
	I0816 18:19:37.170406   74510 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:37.218383   74510 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:37.220397   74510 out.go:177] * Done! kubectl is now configured to use "embed-certs-777541" cluster and "default" namespace by default
	I0816 18:19:37.609143   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:37.609401   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:47.609941   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:47.610185   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:07.611108   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:07.611350   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613446   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:47.613708   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613742   75402 kubeadm.go:310] 
	I0816 18:20:47.613809   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:20:47.613902   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:20:47.613926   75402 kubeadm.go:310] 
	I0816 18:20:47.613976   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:20:47.614028   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:20:47.614160   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:20:47.614174   75402 kubeadm.go:310] 
	I0816 18:20:47.614323   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:20:47.614383   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:20:47.614432   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:20:47.614441   75402 kubeadm.go:310] 
	I0816 18:20:47.614601   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:20:47.614730   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:20:47.614751   75402 kubeadm.go:310] 
	I0816 18:20:47.614875   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:20:47.614982   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:20:47.615101   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:20:47.615217   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:20:47.615230   75402 kubeadm.go:310] 
	I0816 18:20:47.616865   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:20:47.616971   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:20:47.617028   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 18:20:47.617173   75402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 18:20:47.617226   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:20:48.158066   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:20:48.172568   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:20:48.182445   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:20:48.182468   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:20:48.182527   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:20:48.191779   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:20:48.191847   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:20:48.201531   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:20:48.210495   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:20:48.210568   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:20:48.219701   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.228170   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:20:48.228242   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.237366   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:20:48.246335   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:20:48.246393   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:20:48.255655   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:20:48.321873   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:20:48.321930   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:20:48.462199   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:20:48.462324   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:20:48.462448   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:20:48.646565   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:20:48.648485   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:20:48.648605   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:20:48.648748   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:20:48.648895   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:20:48.648994   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:20:48.649088   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:20:48.649185   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:20:48.649282   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:20:48.649368   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:20:48.649485   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:20:48.649595   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:20:48.649649   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:20:48.649753   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:20:48.864525   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:20:49.035729   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:20:49.086765   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:20:49.222612   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:20:49.239121   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:20:49.240158   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:20:49.240200   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:20:49.366027   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:20:49.367770   75402 out.go:235]   - Booting up control plane ...
	I0816 18:20:49.367907   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:20:49.373047   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:20:49.373886   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:20:49.374691   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:20:49.379220   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:21:29.381362   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:21:29.381473   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:29.381700   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:34.381889   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:34.382065   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:44.382765   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:44.382964   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:04.383485   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:04.383748   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382265   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:44.382558   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382572   75402 kubeadm.go:310] 
	I0816 18:22:44.382628   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:22:44.382715   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:22:44.382741   75402 kubeadm.go:310] 
	I0816 18:22:44.382789   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:22:44.382837   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:22:44.382986   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:22:44.382997   75402 kubeadm.go:310] 
	I0816 18:22:44.383149   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:22:44.383202   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:22:44.383246   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:22:44.383258   75402 kubeadm.go:310] 
	I0816 18:22:44.383421   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:22:44.383534   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:22:44.383549   75402 kubeadm.go:310] 
	I0816 18:22:44.383743   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:22:44.383877   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:22:44.383993   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:22:44.384092   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:22:44.384103   75402 kubeadm.go:310] 
	I0816 18:22:44.384783   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:22:44.384895   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:22:44.384986   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:22:44.385062   75402 kubeadm.go:394] duration metric: took 8m1.372176417s to StartCluster
	I0816 18:22:44.385108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:22:44.385173   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:22:44.425862   75402 cri.go:89] found id: ""
	I0816 18:22:44.425892   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.425901   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:22:44.425909   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:22:44.425982   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:22:44.461988   75402 cri.go:89] found id: ""
	I0816 18:22:44.462019   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.462030   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:22:44.462038   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:22:44.462109   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:22:44.496063   75402 cri.go:89] found id: ""
	I0816 18:22:44.496095   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.496106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:22:44.496114   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:22:44.496175   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:22:44.529875   75402 cri.go:89] found id: ""
	I0816 18:22:44.529899   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.529906   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:22:44.529912   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:22:44.529958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:22:44.565745   75402 cri.go:89] found id: ""
	I0816 18:22:44.565781   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.565791   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:22:44.565798   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:22:44.565860   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:22:44.604122   75402 cri.go:89] found id: ""
	I0816 18:22:44.604149   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.604160   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:22:44.604168   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:22:44.604228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:22:44.636607   75402 cri.go:89] found id: ""
	I0816 18:22:44.636658   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.636669   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:22:44.636677   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:22:44.636736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:22:44.670942   75402 cri.go:89] found id: ""
	I0816 18:22:44.670973   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.670981   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:22:44.670989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:22:44.671001   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:22:44.722403   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:22:44.722433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:22:44.738587   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:22:44.738627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:22:44.854530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:22:44.854563   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:22:44.854579   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:22:44.957308   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:22:44.957342   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 18:22:44.997652   75402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:22:44.997714   75402 out.go:270] * 
	W0816 18:22:44.997804   75402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:44.997828   75402 out.go:270] * 
	W0816 18:22:44.998787   75402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:22:45.002189   75402 out.go:201] 
	W0816 18:22:45.003254   75402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:45.003310   75402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:22:45.003340   75402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:22:45.004826   75402 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.319504875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833110319479993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b39f1d6a-24f9-4a6c-9a05-3b246e6efe88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.320013268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0327e545-00d1-40a4-b3ce-0818383dc2df name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.320081314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0327e545-00d1-40a4-b3ce-0818383dc2df name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.320161756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0327e545-00d1-40a4-b3ce-0818383dc2df name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.381405964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b40720d-4006-4870-8149-6c5a4e5e447a name=/runtime.v1.RuntimeService/Version
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.381519571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b40720d-4006-4870-8149-6c5a4e5e447a name=/runtime.v1.RuntimeService/Version
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.382690463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d27291dc-7b92-4b59-bbf1-017fd9d2cf45 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.383387155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833110383305927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d27291dc-7b92-4b59-bbf1-017fd9d2cf45 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.384361301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=555d0beb-b56b-4b4a-88bf-c4b1295dba8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.384451477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=555d0beb-b56b-4b4a-88bf-c4b1295dba8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.384504791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=555d0beb-b56b-4b4a-88bf-c4b1295dba8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.417052990Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7290e9f2-d96c-43c1-a4ce-55fde5200ecc name=/runtime.v1.RuntimeService/Version
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.417196629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7290e9f2-d96c-43c1-a4ce-55fde5200ecc name=/runtime.v1.RuntimeService/Version
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.418356491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f12c37da-ea50-43f8-a242-da1d459a59eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.418801039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833110418760331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f12c37da-ea50-43f8-a242-da1d459a59eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.419403695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8994308f-71a9-44ec-8f1d-b89b9ece6a43 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.419456834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8994308f-71a9-44ec-8f1d-b89b9ece6a43 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.419509997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8994308f-71a9-44ec-8f1d-b89b9ece6a43 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.448695441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6c4e8d2-f0a8-408a-8b89-8b87dc626762 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.448784524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6c4e8d2-f0a8-408a-8b89-8b87dc626762 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.449980720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=997693bf-7011-4cc1-8ef1-01b0423a1351 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.450611561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833110450572855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=997693bf-7011-4cc1-8ef1-01b0423a1351 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.451100116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25b0c341-b93d-4eb2-a33d-e3a805250943 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.451205416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25b0c341-b93d-4eb2-a33d-e3a805250943 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:31:50 old-k8s-version-783465 crio[653]: time="2024-08-16 18:31:50.451242608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=25b0c341-b93d-4eb2-a33d-e3a805250943 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 18:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045169] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997853] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.853876] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.352877] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.345481] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.064693] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054338] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.181344] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.146416] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.232451] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +6.280356] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +0.058572] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.868893] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[ +13.997238] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 18:18] systemd-fstab-generator[5183]: Ignoring "noauto" option for root device
	[Aug16 18:20] systemd-fstab-generator[5458]: Ignoring "noauto" option for root device
	[  +0.064746] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:31:50 up 17 min,  0 users,  load average: 0.04, 0.02, 0.03
	Linux old-k8s-version-783465 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/sock_posix.go:70 +0x1c5
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net.internetSocket(0x4f7fe40, 0xc000354f00, 0x48ab5d6, 0x3, 0x4fb9160, 0x0, 0x4fb9160, 0xc000b78a80, 0x1, 0x0, ...)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/ipsock_posix.go:141 +0x145
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net.(*sysDialer).doDialTCP(0xc000c2a800, 0x4f7fe40, 0xc000354f00, 0x0, 0xc000b78a80, 0x3fddce0, 0x70f9210, 0x0)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/tcpsock_posix.go:65 +0xc5
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net.(*sysDialer).dialTCP(0xc000c2a800, 0x4f7fe40, 0xc000354f00, 0x0, 0xc000b78a80, 0x57b620, 0x48ab5d6, 0x7f1ed47ac7c8)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net.(*sysDialer).dialSingle(0xc000c2a800, 0x4f7fe40, 0xc000354f00, 0x4f1ff00, 0xc000b78a80, 0x0, 0x0, 0x0, 0x0)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net.(*sysDialer).dialSerial(0xc000c2a800, 0x4f7fe40, 0xc000354f00, 0xc000c35da0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net.(*Dialer).DialContext(0xc000776de0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000057f50, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000cd41e0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000057f50, 0x24, 0x60, 0x7f1ed631fe50, 0x118, ...)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net/http.(*Transport).dial(0xc000884f00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000057f50, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net/http.(*Transport).dialConn(0xc000884f00, 0x4f7fe00, 0xc000120018, 0x0, 0xc000292300, 0x5, 0xc000057f50, 0x24, 0x0, 0xc000b60120, ...)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: net/http.(*Transport).dialConnFor(0xc000884f00, 0xc000876210)
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]: created by net/http.(*Transport).queueForDial
	Aug 16 18:31:50 old-k8s-version-783465 kubelet[6639]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 16 18:31:50 old-k8s-version-783465 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 18:31:50 old-k8s-version-783465 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (222.519079ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-783465" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)
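The kubeadm and kubelet output above points at a kubelet that never became healthy on the old-k8s-version node. A minimal manual triage sketch follows, using only the commands the log itself suggests (systemctl/journalctl for the kubelet, crictl against the CRI-O socket shown in the log, and the cgroup-driver hint from the minikube suggestion); the profile name old-k8s-version-783465 comes from the log, while the use of sudo and the exact retry flags are assumptions, not the test's actual procedure.

	# SSH into the failing minikube node (profile name taken from the log above)
	minikube ssh -p old-k8s-version-783465

	# On the node: check whether the kubelet is running and why it exited
	systemctl status kubelet
	journalctl -xeu kubelet

	# List control-plane containers via the CRI-O socket named in the log, then inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Back on the host: retry with the cgroup driver suggested in the log output
	minikube start -p old-k8s-version-783465 --extra-config=kubelet.cgroup-driver=systemd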

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (399.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864476 -n no-preload-864476
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 18:35:08.091902985 +0000 UTC m=+6408.410448013
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-864476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-864476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.379µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-864476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
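The assertions above wait for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and then describe the dashboard-metrics-scraper deployment, both of which timed out here. A quick manual re-check of the same resources, assuming the no-preload-864476 kubeconfig context from the log is still reachable, could look like this sketch (the jsonpath query is an illustrative addition, not part of the test):

	# Pods the test was waiting for (label selector and namespace from the test output)
	kubectl --context no-preload-864476 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

	# Deployment the test tried to describe
	kubectl --context no-preload-864476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

	# The test expects the scraper image to contain registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-864476 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[0].image}'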
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-864476 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-864476 logs -n 25: (1.254117617s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC | 16 Aug 24 18:34 UTC |
	| start   | -p newest-cni-774287 --memory=2200 --alsologtostderr   | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC | 16 Aug 24 18:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC | 16 Aug 24 18:34 UTC |
	| addons  | enable metrics-server -p newest-cni-774287             | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:35 UTC | 16 Aug 24 18:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-774287                                   | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:34:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:34:14.800399   81976 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:34:14.800917   81976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:34:14.800935   81976 out.go:358] Setting ErrFile to fd 2...
	I0816 18:34:14.800943   81976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:34:14.801359   81976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:34:14.802232   81976 out.go:352] Setting JSON to false
	I0816 18:34:14.803127   81976 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8153,"bootTime":1723825102,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:34:14.803184   81976 start.go:139] virtualization: kvm guest
	I0816 18:34:14.805422   81976 out.go:177] * [newest-cni-774287] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:34:14.806793   81976 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:34:14.806813   81976 notify.go:220] Checking for updates...
	I0816 18:34:14.809572   81976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:34:14.810834   81976 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:34:14.812152   81976 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:34:14.813328   81976 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:34:14.814617   81976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:34:14.816259   81976 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:14.816379   81976 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:14.816475   81976 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:14.816541   81976 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:34:14.851938   81976 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 18:34:14.853195   81976 start.go:297] selected driver: kvm2
	I0816 18:34:14.853217   81976 start.go:901] validating driver "kvm2" against <nil>
	I0816 18:34:14.853232   81976 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:34:14.853918   81976 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:34:14.853993   81976 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:34:14.868665   81976 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:34:14.868720   81976 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0816 18:34:14.868747   81976 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0816 18:34:14.868939   81976 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 18:34:14.868995   81976 cni.go:84] Creating CNI manager for ""
	I0816 18:34:14.869008   81976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:34:14.869019   81976 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 18:34:14.869098   81976 start.go:340] cluster config:
	{Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:34:14.869231   81976 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:34:14.870976   81976 out.go:177] * Starting "newest-cni-774287" primary control-plane node in "newest-cni-774287" cluster
	I0816 18:34:14.872003   81976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:34:14.872030   81976 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:34:14.872037   81976 cache.go:56] Caching tarball of preloaded images
	I0816 18:34:14.872113   81976 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:34:14.872126   81976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 18:34:14.872244   81976 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json ...
	I0816 18:34:14.872264   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json: {Name:mk36d324910fe56cbc34dc45337a916147efc7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:14.872404   81976 start.go:360] acquireMachinesLock for newest-cni-774287: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:34:14.872431   81976 start.go:364] duration metric: took 14.058µs to acquireMachinesLock for "newest-cni-774287"
	I0816 18:34:14.872444   81976 start.go:93] Provisioning new machine with config: &{Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:34:14.872501   81976 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 18:34:14.873921   81976 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 18:34:14.874046   81976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:34:14.874086   81976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:34:14.890021   81976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0816 18:34:14.890409   81976 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:34:14.890971   81976 main.go:141] libmachine: Using API Version  1
	I0816 18:34:14.890990   81976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:34:14.891321   81976 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:34:14.891533   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:14.891675   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:14.891816   81976 start.go:159] libmachine.API.Create for "newest-cni-774287" (driver="kvm2")
	I0816 18:34:14.891845   81976 client.go:168] LocalClient.Create starting
	I0816 18:34:14.891877   81976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 18:34:14.891922   81976 main.go:141] libmachine: Decoding PEM data...
	I0816 18:34:14.891941   81976 main.go:141] libmachine: Parsing certificate...
	I0816 18:34:14.892019   81976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 18:34:14.892043   81976 main.go:141] libmachine: Decoding PEM data...
	I0816 18:34:14.892060   81976 main.go:141] libmachine: Parsing certificate...
	I0816 18:34:14.892084   81976 main.go:141] libmachine: Running pre-create checks...
	I0816 18:34:14.892095   81976 main.go:141] libmachine: (newest-cni-774287) Calling .PreCreateCheck
	I0816 18:34:14.892428   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetConfigRaw
	I0816 18:34:14.892796   81976 main.go:141] libmachine: Creating machine...
	I0816 18:34:14.892811   81976 main.go:141] libmachine: (newest-cni-774287) Calling .Create
	I0816 18:34:14.892961   81976 main.go:141] libmachine: (newest-cni-774287) Creating KVM machine...
	I0816 18:34:14.894317   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found existing default KVM network
	I0816 18:34:14.896019   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:14.895877   81999 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfa0}
	I0816 18:34:14.896040   81976 main.go:141] libmachine: (newest-cni-774287) DBG | created network xml: 
	I0816 18:34:14.896050   81976 main.go:141] libmachine: (newest-cni-774287) DBG | <network>
	I0816 18:34:14.896056   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   <name>mk-newest-cni-774287</name>
	I0816 18:34:14.896062   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   <dns enable='no'/>
	I0816 18:34:14.896066   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   
	I0816 18:34:14.896073   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 18:34:14.896084   81976 main.go:141] libmachine: (newest-cni-774287) DBG |     <dhcp>
	I0816 18:34:14.896093   81976 main.go:141] libmachine: (newest-cni-774287) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 18:34:14.896100   81976 main.go:141] libmachine: (newest-cni-774287) DBG |     </dhcp>
	I0816 18:34:14.896111   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   </ip>
	I0816 18:34:14.896117   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   
	I0816 18:34:14.896125   81976 main.go:141] libmachine: (newest-cni-774287) DBG | </network>
	I0816 18:34:14.896136   81976 main.go:141] libmachine: (newest-cni-774287) DBG | 
	I0816 18:34:14.901237   81976 main.go:141] libmachine: (newest-cni-774287) DBG | trying to create private KVM network mk-newest-cni-774287 192.168.39.0/24...
	I0816 18:34:14.971625   81976 main.go:141] libmachine: (newest-cni-774287) DBG | private KVM network mk-newest-cni-774287 192.168.39.0/24 created
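	A quick way to confirm this step by hand, assuming virsh is available on the host running the job (the network name is taken from the log above), is to list and dump the freshly created libvirt network:

	    virsh net-list --all
	    virsh net-dumpxml mk-newest-cni-774287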
	I0816 18:34:14.971674   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:14.971583   81999 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:34:14.971688   81976 main.go:141] libmachine: (newest-cni-774287) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287 ...
	I0816 18:34:14.971710   81976 main.go:141] libmachine: (newest-cni-774287) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 18:34:14.971768   81976 main.go:141] libmachine: (newest-cni-774287) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 18:34:15.226744   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:15.226565   81999 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa...
	I0816 18:34:15.482647   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:15.482516   81999 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/newest-cni-774287.rawdisk...
	I0816 18:34:15.482677   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Writing magic tar header
	I0816 18:34:15.482691   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Writing SSH key tar header
	I0816 18:34:15.482699   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:15.482631   81999 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287 ...
	I0816 18:34:15.482727   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287
	I0816 18:34:15.482770   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 18:34:15.482792   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:34:15.482807   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287 (perms=drwx------)
	I0816 18:34:15.482822   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 18:34:15.482833   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 18:34:15.482842   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins
	I0816 18:34:15.482861   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 18:34:15.482877   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 18:34:15.482891   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 18:34:15.482902   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home
	I0816 18:34:15.482914   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Skipping /home - not owner
	I0816 18:34:15.482928   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 18:34:15.482940   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 18:34:15.482946   81976 main.go:141] libmachine: (newest-cni-774287) Creating domain...
	I0816 18:34:15.483916   81976 main.go:141] libmachine: (newest-cni-774287) define libvirt domain using xml: 
	I0816 18:34:15.483935   81976 main.go:141] libmachine: (newest-cni-774287) <domain type='kvm'>
	I0816 18:34:15.483945   81976 main.go:141] libmachine: (newest-cni-774287)   <name>newest-cni-774287</name>
	I0816 18:34:15.483953   81976 main.go:141] libmachine: (newest-cni-774287)   <memory unit='MiB'>2200</memory>
	I0816 18:34:15.483962   81976 main.go:141] libmachine: (newest-cni-774287)   <vcpu>2</vcpu>
	I0816 18:34:15.483969   81976 main.go:141] libmachine: (newest-cni-774287)   <features>
	I0816 18:34:15.483981   81976 main.go:141] libmachine: (newest-cni-774287)     <acpi/>
	I0816 18:34:15.483994   81976 main.go:141] libmachine: (newest-cni-774287)     <apic/>
	I0816 18:34:15.484004   81976 main.go:141] libmachine: (newest-cni-774287)     <pae/>
	I0816 18:34:15.484021   81976 main.go:141] libmachine: (newest-cni-774287)     
	I0816 18:34:15.484058   81976 main.go:141] libmachine: (newest-cni-774287)   </features>
	I0816 18:34:15.484090   81976 main.go:141] libmachine: (newest-cni-774287)   <cpu mode='host-passthrough'>
	I0816 18:34:15.484105   81976 main.go:141] libmachine: (newest-cni-774287)   
	I0816 18:34:15.484114   81976 main.go:141] libmachine: (newest-cni-774287)   </cpu>
	I0816 18:34:15.484124   81976 main.go:141] libmachine: (newest-cni-774287)   <os>
	I0816 18:34:15.484132   81976 main.go:141] libmachine: (newest-cni-774287)     <type>hvm</type>
	I0816 18:34:15.484141   81976 main.go:141] libmachine: (newest-cni-774287)     <boot dev='cdrom'/>
	I0816 18:34:15.484150   81976 main.go:141] libmachine: (newest-cni-774287)     <boot dev='hd'/>
	I0816 18:34:15.484159   81976 main.go:141] libmachine: (newest-cni-774287)     <bootmenu enable='no'/>
	I0816 18:34:15.484171   81976 main.go:141] libmachine: (newest-cni-774287)   </os>
	I0816 18:34:15.484191   81976 main.go:141] libmachine: (newest-cni-774287)   <devices>
	I0816 18:34:15.484210   81976 main.go:141] libmachine: (newest-cni-774287)     <disk type='file' device='cdrom'>
	I0816 18:34:15.484227   81976 main.go:141] libmachine: (newest-cni-774287)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/boot2docker.iso'/>
	I0816 18:34:15.484239   81976 main.go:141] libmachine: (newest-cni-774287)       <target dev='hdc' bus='scsi'/>
	I0816 18:34:15.484252   81976 main.go:141] libmachine: (newest-cni-774287)       <readonly/>
	I0816 18:34:15.484263   81976 main.go:141] libmachine: (newest-cni-774287)     </disk>
	I0816 18:34:15.484275   81976 main.go:141] libmachine: (newest-cni-774287)     <disk type='file' device='disk'>
	I0816 18:34:15.484292   81976 main.go:141] libmachine: (newest-cni-774287)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 18:34:15.484316   81976 main.go:141] libmachine: (newest-cni-774287)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/newest-cni-774287.rawdisk'/>
	I0816 18:34:15.484328   81976 main.go:141] libmachine: (newest-cni-774287)       <target dev='hda' bus='virtio'/>
	I0816 18:34:15.484344   81976 main.go:141] libmachine: (newest-cni-774287)     </disk>
	I0816 18:34:15.484360   81976 main.go:141] libmachine: (newest-cni-774287)     <interface type='network'>
	I0816 18:34:15.484372   81976 main.go:141] libmachine: (newest-cni-774287)       <source network='mk-newest-cni-774287'/>
	I0816 18:34:15.484391   81976 main.go:141] libmachine: (newest-cni-774287)       <model type='virtio'/>
	I0816 18:34:15.484404   81976 main.go:141] libmachine: (newest-cni-774287)     </interface>
	I0816 18:34:15.484411   81976 main.go:141] libmachine: (newest-cni-774287)     <interface type='network'>
	I0816 18:34:15.484419   81976 main.go:141] libmachine: (newest-cni-774287)       <source network='default'/>
	I0816 18:34:15.484431   81976 main.go:141] libmachine: (newest-cni-774287)       <model type='virtio'/>
	I0816 18:34:15.484440   81976 main.go:141] libmachine: (newest-cni-774287)     </interface>
	I0816 18:34:15.484448   81976 main.go:141] libmachine: (newest-cni-774287)     <serial type='pty'>
	I0816 18:34:15.484455   81976 main.go:141] libmachine: (newest-cni-774287)       <target port='0'/>
	I0816 18:34:15.484462   81976 main.go:141] libmachine: (newest-cni-774287)     </serial>
	I0816 18:34:15.484474   81976 main.go:141] libmachine: (newest-cni-774287)     <console type='pty'>
	I0816 18:34:15.484486   81976 main.go:141] libmachine: (newest-cni-774287)       <target type='serial' port='0'/>
	I0816 18:34:15.484504   81976 main.go:141] libmachine: (newest-cni-774287)     </console>
	I0816 18:34:15.484527   81976 main.go:141] libmachine: (newest-cni-774287)     <rng model='virtio'>
	I0816 18:34:15.484541   81976 main.go:141] libmachine: (newest-cni-774287)       <backend model='random'>/dev/random</backend>
	I0816 18:34:15.484549   81976 main.go:141] libmachine: (newest-cni-774287)     </rng>
	I0816 18:34:15.484556   81976 main.go:141] libmachine: (newest-cni-774287)     
	I0816 18:34:15.484565   81976 main.go:141] libmachine: (newest-cni-774287)     
	I0816 18:34:15.484581   81976 main.go:141] libmachine: (newest-cni-774287)   </devices>
	I0816 18:34:15.484591   81976 main.go:141] libmachine: (newest-cni-774287) </domain>
	I0816 18:34:15.484633   81976 main.go:141] libmachine: (newest-cni-774287) 
	I0816 18:34:15.489321   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:96:f4:f7 in network default
	I0816 18:34:15.489918   81976 main.go:141] libmachine: (newest-cni-774287) Ensuring networks are active...
	I0816 18:34:15.489947   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:15.490591   81976 main.go:141] libmachine: (newest-cni-774287) Ensuring network default is active
	I0816 18:34:15.490867   81976 main.go:141] libmachine: (newest-cni-774287) Ensuring network mk-newest-cni-774287 is active
	I0816 18:34:15.491446   81976 main.go:141] libmachine: (newest-cni-774287) Getting domain xml...
	I0816 18:34:15.492270   81976 main.go:141] libmachine: (newest-cni-774287) Creating domain...
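	Once the XML above has been defined, the resulting guest shows up in libvirt and can be inspected with the standard tooling; a minimal sketch, assuming the same host and the domain name from the log:

	    virsh list --all
	    virsh dumpxml newest-cni-774287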
	I0816 18:34:16.745025   81976 main.go:141] libmachine: (newest-cni-774287) Waiting to get IP...
	I0816 18:34:16.745811   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:16.746247   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:16.746273   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:16.746226   81999 retry.go:31] will retry after 265.597921ms: waiting for machine to come up
	I0816 18:34:17.013618   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:17.014114   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:17.014146   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:17.014065   81999 retry.go:31] will retry after 374.317465ms: waiting for machine to come up
	I0816 18:34:17.389569   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:17.390116   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:17.390148   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:17.390067   81999 retry.go:31] will retry after 371.344854ms: waiting for machine to come up
	I0816 18:34:17.762470   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:17.762866   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:17.762897   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:17.762818   81999 retry.go:31] will retry after 424.91842ms: waiting for machine to come up
	I0816 18:34:18.189428   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:18.189942   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:18.189967   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:18.189901   81999 retry.go:31] will retry after 487.835028ms: waiting for machine to come up
	I0816 18:34:18.679759   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:18.680200   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:18.680225   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:18.680155   81999 retry.go:31] will retry after 850.214847ms: waiting for machine to come up
	I0816 18:34:19.532156   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:19.532604   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:19.532655   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:19.532552   81999 retry.go:31] will retry after 792.840893ms: waiting for machine to come up
	I0816 18:34:20.326950   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:20.327482   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:20.327512   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:20.327427   81999 retry.go:31] will retry after 1.013314353s: waiting for machine to come up
	I0816 18:34:21.342627   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:21.343114   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:21.343142   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:21.342998   81999 retry.go:31] will retry after 1.257401636s: waiting for machine to come up
	I0816 18:34:22.601621   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:22.602248   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:22.602271   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:22.602191   81999 retry.go:31] will retry after 1.727032619s: waiting for machine to come up
	I0816 18:34:24.330884   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:24.331372   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:24.331398   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:24.331343   81999 retry.go:31] will retry after 2.002119281s: waiting for machine to come up
	I0816 18:34:26.334731   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:26.335301   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:26.335321   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:26.335251   81999 retry.go:31] will retry after 3.422510613s: waiting for machine to come up
	I0816 18:34:29.761853   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:29.762217   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:29.762242   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:29.762173   81999 retry.go:31] will retry after 4.140861901s: waiting for machine to come up
	I0816 18:34:33.905830   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:33.906250   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:33.906283   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:33.906189   81999 retry.go:31] will retry after 4.137136905s: waiting for machine to come up
	I0816 18:34:38.046346   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.047072   81976 main.go:141] libmachine: (newest-cni-774287) Found IP for machine: 192.168.39.194
	I0816 18:34:38.047107   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has current primary IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.047119   81976 main.go:141] libmachine: (newest-cni-774287) Reserving static IP address...
	I0816 18:34:38.048060   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find host DHCP lease matching {name: "newest-cni-774287", mac: "52:54:00:2d:15:e2", ip: "192.168.39.194"} in network mk-newest-cni-774287
	I0816 18:34:38.126390   81976 main.go:141] libmachine: (newest-cni-774287) Reserved static IP address: 192.168.39.194
	I0816 18:34:38.126431   81976 main.go:141] libmachine: (newest-cni-774287) Waiting for SSH to be available...
	I0816 18:34:38.126443   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Getting to WaitForSSH function...
	I0816 18:34:38.129678   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.130117   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.130148   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.130291   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Using SSH client type: external
	I0816 18:34:38.130333   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa (-rw-------)
	I0816 18:34:38.130393   81976 main.go:141] libmachine: (newest-cni-774287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:34:38.130413   81976 main.go:141] libmachine: (newest-cni-774287) DBG | About to run SSH command:
	I0816 18:34:38.130431   81976 main.go:141] libmachine: (newest-cni-774287) DBG | exit 0
	I0816 18:34:38.261282   81976 main.go:141] libmachine: (newest-cni-774287) DBG | SSH cmd err, output: <nil>: 
	I0816 18:34:38.261587   81976 main.go:141] libmachine: (newest-cni-774287) KVM machine creation complete!
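	The address found by the wait loop above comes from the DHCP lease handed out on the private network, so if a run stalls at "Waiting to get IP" the lease table and guest interfaces are the first things to check; a sketch assuming libvirt's bundled tools:

	    virsh net-dhcp-leases mk-newest-cni-774287
	    virsh domifaddr newest-cni-774287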
	I0816 18:34:38.261960   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetConfigRaw
	I0816 18:34:38.262482   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:38.262687   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:38.262887   81976 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 18:34:38.262907   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:34:38.264106   81976 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 18:34:38.264120   81976 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 18:34:38.264128   81976 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 18:34:38.264156   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.266644   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.266973   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.267010   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.267183   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.267359   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.267527   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.267642   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.267806   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.268044   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.268059   81976 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 18:34:38.384019   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:34:38.384044   81976 main.go:141] libmachine: Detecting the provisioner...
	I0816 18:34:38.384053   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.387470   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.387991   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.388027   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.388192   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.388363   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.388500   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.388678   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.388850   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.389024   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.389035   81976 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 18:34:38.505538   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 18:34:38.505659   81976 main.go:141] libmachine: found compatible host: buildroot
	I0816 18:34:38.505681   81976 main.go:141] libmachine: Provisioning with buildroot...
	I0816 18:34:38.505694   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:38.505963   81976 buildroot.go:166] provisioning hostname "newest-cni-774287"
	I0816 18:34:38.505985   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:38.506208   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.508968   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.509327   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.509346   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.509558   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.509748   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.509912   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.510044   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.510192   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.510394   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.510408   81976 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-774287 && echo "newest-cni-774287" | sudo tee /etc/hostname
	I0816 18:34:38.639166   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-774287
	
	I0816 18:34:38.639190   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.642270   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.642699   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.642720   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.642975   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.643182   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.643333   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.643496   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.643695   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.643909   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.643927   81976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-774287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-774287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-774287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:34:38.764958   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:34:38.764995   81976 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:34:38.765019   81976 buildroot.go:174] setting up certificates
	I0816 18:34:38.765033   81976 provision.go:84] configureAuth start
	I0816 18:34:38.765049   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:38.765348   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:38.768384   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.768734   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.768762   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.769013   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.771573   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.771990   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.772021   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.772208   81976 provision.go:143] copyHostCerts
	I0816 18:34:38.772295   81976 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:34:38.772319   81976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:34:38.772403   81976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:34:38.772557   81976 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:34:38.772569   81976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:34:38.772604   81976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:34:38.772725   81976 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:34:38.772735   81976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:34:38.772764   81976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:34:38.772841   81976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.newest-cni-774287 san=[127.0.0.1 192.168.39.194 localhost minikube newest-cni-774287]
	I0816 18:34:39.063813   81976 provision.go:177] copyRemoteCerts
	I0816 18:34:39.063874   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:34:39.063898   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.066633   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.067097   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.067132   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.067288   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.067470   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.067625   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.067788   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.154901   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:34:39.178395   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:34:39.200786   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:34:39.223737   81976 provision.go:87] duration metric: took 458.687372ms to configureAuth
	I0816 18:34:39.223765   81976 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:34:39.223959   81976 config.go:182] Loaded profile config "newest-cni-774287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:39.224043   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.226949   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.227378   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.227413   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.227579   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.227784   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.227958   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.228132   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.228327   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:39.228534   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:39.228569   81976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:34:39.514003   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:34:39.514040   81976 main.go:141] libmachine: Checking connection to Docker...
	I0816 18:34:39.514052   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetURL
	I0816 18:34:39.515607   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Using libvirt version 6000000
	I0816 18:34:39.518012   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.518416   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.518437   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.518699   81976 main.go:141] libmachine: Docker is up and running!
	I0816 18:34:39.518717   81976 main.go:141] libmachine: Reticulating splines...
	I0816 18:34:39.518725   81976 client.go:171] duration metric: took 24.626872631s to LocalClient.Create
	I0816 18:34:39.518751   81976 start.go:167] duration metric: took 24.626937052s to libmachine.API.Create "newest-cni-774287"
	I0816 18:34:39.518760   81976 start.go:293] postStartSetup for "newest-cni-774287" (driver="kvm2")
	I0816 18:34:39.518772   81976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:34:39.518792   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.519063   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:34:39.519090   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.521609   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.521925   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.521958   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.522039   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.522214   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.522374   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.522489   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.610884   81976 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:34:39.614996   81976 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:34:39.615023   81976 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:34:39.615082   81976 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:34:39.615151   81976 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:34:39.615260   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:34:39.624399   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:34:39.648986   81976 start.go:296] duration metric: took 130.2114ms for postStartSetup
	I0816 18:34:39.649037   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetConfigRaw
	I0816 18:34:39.649632   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:39.652258   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.652593   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.652643   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.652925   81976 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json ...
	I0816 18:34:39.653140   81976 start.go:128] duration metric: took 24.780630072s to createHost
	I0816 18:34:39.653166   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.655622   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.655955   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.656010   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.656103   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.656356   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.656537   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.656710   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.656859   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:39.657018   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:39.657037   81976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:34:39.777271   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723833279.749567986
	
	I0816 18:34:39.777292   81976 fix.go:216] guest clock: 1723833279.749567986
	I0816 18:34:39.777298   81976 fix.go:229] Guest: 2024-08-16 18:34:39.749567986 +0000 UTC Remote: 2024-08-16 18:34:39.653152847 +0000 UTC m=+24.886950896 (delta=96.415139ms)
	I0816 18:34:39.777346   81976 fix.go:200] guest clock delta is within tolerance: 96.415139ms
	I0816 18:34:39.777354   81976 start.go:83] releasing machines lock for "newest-cni-774287", held for 24.904916568s
	I0816 18:34:39.777384   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.777658   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:39.780903   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.781313   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.781343   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.781470   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.781967   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.782154   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.782247   81976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:34:39.782286   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.782401   81976 ssh_runner.go:195] Run: cat /version.json
	I0816 18:34:39.782425   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.784819   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785157   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785263   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.785296   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785471   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.785568   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.785594   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785628   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.785782   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.785792   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.785965   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.785962   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.786115   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.786259   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.905988   81976 ssh_runner.go:195] Run: systemctl --version
	I0816 18:34:39.912029   81976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:34:40.073666   81976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:34:40.079320   81976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:34:40.079396   81976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:34:40.094734   81976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:34:40.094760   81976 start.go:495] detecting cgroup driver to use...
	I0816 18:34:40.094812   81976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:34:40.110377   81976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:34:40.123825   81976 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:34:40.123886   81976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:34:40.137975   81976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:34:40.150867   81976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:34:40.273358   81976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:34:40.433238   81976 docker.go:233] disabling docker service ...
	I0816 18:34:40.433314   81976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:34:40.447429   81976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:34:40.462059   81976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:34:40.592974   81976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:34:40.722449   81976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:34:40.736925   81976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:34:40.755887   81976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:34:40.755957   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.766221   81976 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:34:40.766281   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.777391   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.787248   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.798119   81976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:34:40.808410   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.818609   81976 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.836177   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.847019   81976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:34:40.856589   81976 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:34:40.856678   81976 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:34:40.870035   81976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:34:40.879791   81976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:34:41.012032   81976 ssh_runner.go:195] Run: sudo systemctl restart crio
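	The runtime preparation above is done with a handful of shell edits executed over SSH. A condensed, hand-runnable sketch of those steps, using only the commands and paths that appear in the log lines above:

	    # Point crictl at the CRI-O socket.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

	    # Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

	    # Let pods bind low ports without extra privileges.
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf

	    # br_netfilter was not loaded on this image (the sysctl probe above failed), so load it,
	    # enable IPv4 forwarding, and restart the runtime to pick up the new configuration.
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio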
	I0816 18:34:41.149016   81976 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:34:41.149106   81976 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:34:41.153646   81976 start.go:563] Will wait 60s for crictl version
	I0816 18:34:41.153710   81976 ssh_runner.go:195] Run: which crictl
	I0816 18:34:41.158088   81976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:34:41.199450   81976 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:34:41.199520   81976 ssh_runner.go:195] Run: crio --version
	I0816 18:34:41.227587   81976 ssh_runner.go:195] Run: crio --version
	I0816 18:34:41.255800   81976 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:34:41.257025   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:41.259537   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:41.259924   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:41.259954   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:41.260129   81976 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:34:41.264282   81976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:34:41.278214   81976 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0816 18:34:41.279347   81976 kubeadm.go:883] updating cluster {Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:34:41.279486   81976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:34:41.279554   81976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:34:41.311048   81976 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:34:41.311127   81976 ssh_runner.go:195] Run: which lz4
	I0816 18:34:41.315060   81976 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:34:41.319427   81976 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:34:41.319463   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:34:42.497551   81976 crio.go:462] duration metric: took 1.182531558s to copy over tarball
	I0816 18:34:42.497631   81976 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:34:44.549865   81976 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.05220237s)
	I0816 18:34:44.549898   81976 crio.go:469] duration metric: took 2.052316244s to extract the tarball
	I0816 18:34:44.549908   81976 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:34:44.589205   81976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:34:44.633492   81976 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:34:44.633513   81976 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:34:44.633520   81976 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.31.0 crio true true} ...
	I0816 18:34:44.633634   81976 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-774287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:34:44.633723   81976 ssh_runner.go:195] Run: crio config
	I0816 18:34:44.676555   81976 cni.go:84] Creating CNI manager for ""
	I0816 18:34:44.676585   81976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:34:44.676599   81976 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0816 18:34:44.676645   81976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-774287 NodeName:newest-cni-774287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:34:44.676823   81976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-774287"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:34:44.676895   81976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:34:44.687664   81976 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:34:44.687731   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:34:44.697996   81976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0816 18:34:44.714412   81976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:34:44.730623   81976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0816 18:34:44.746941   81976 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0816 18:34:44.750540   81976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
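	The one-liner above rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, append the current one, and copy the result back with sudo. The same commands as in the log, just split across lines for readability (the literal tab in the entry matters):

	    entry=$'192.168.39.194\tcontrol-plane.minikube.internal'
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts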
	I0816 18:34:44.762447   81976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:34:44.896041   81976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:34:44.912732   81976 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287 for IP: 192.168.39.194
	I0816 18:34:44.912759   81976 certs.go:194] generating shared ca certs ...
	I0816 18:34:44.912780   81976 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:44.912954   81976 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:34:44.913017   81976 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:34:44.913033   81976 certs.go:256] generating profile certs ...
	I0816 18:34:44.913107   81976 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/client.key
	I0816 18:34:44.913128   81976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/client.crt with IP's: []
	I0816 18:34:45.019285   81976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/client.crt ...
	I0816 18:34:45.019318   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/client.crt: {Name:mke416257574e1cbfe7f500544155bbf43ff3cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:45.019507   81976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/client.key ...
	I0816 18:34:45.019522   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/client.key: {Name:mkfc6e8a8609badd7af0ff0f077e791069f3e2de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:45.019624   81976 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key.fca7bee4
	I0816 18:34:45.019647   81976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.crt.fca7bee4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194]
	I0816 18:34:45.139778   81976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.crt.fca7bee4 ...
	I0816 18:34:45.139806   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.crt.fca7bee4: {Name:mk38e2ebede44cac70d2a0a81f88de5e47468e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:45.139963   81976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key.fca7bee4 ...
	I0816 18:34:45.139976   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key.fca7bee4: {Name:mk9b63ab98be75509146e485e7071054c6b17ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:45.140044   81976 certs.go:381] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.crt.fca7bee4 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.crt
	I0816 18:34:45.140133   81976 certs.go:385] copying /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key.fca7bee4 -> /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key
	I0816 18:34:45.140197   81976 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.key
	I0816 18:34:45.140215   81976 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.crt with IP's: []
	I0816 18:34:45.257627   81976 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.crt ...
	I0816 18:34:45.257653   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.crt: {Name:mkcee641644bdccccb7d381966fb4e2a8538b6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:45.257811   81976 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.key ...
	I0816 18:34:45.257824   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.key: {Name:mkd46ee1511a6bc8543ffec2fca3738441211235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:45.258007   81976 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:34:45.258042   81976 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:34:45.258052   81976 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:34:45.258073   81976 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:34:45.258099   81976 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:34:45.258119   81976 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:34:45.258155   81976 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:34:45.258737   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:34:45.284605   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:34:45.306820   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:34:45.329093   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:34:45.352763   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:34:45.374776   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:34:45.397223   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:34:45.419303   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:34:45.441020   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:34:45.462959   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:34:45.485684   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:34:45.507710   81976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:34:45.523719   81976 ssh_runner.go:195] Run: openssl version
	I0816 18:34:45.529401   81976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:34:45.540607   81976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:34:45.544642   81976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:34:45.544695   81976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:34:45.550626   81976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:34:45.561095   81976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:34:45.571628   81976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:34:45.575730   81976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:34:45.575777   81976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:34:45.581073   81976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:34:45.591332   81976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:34:45.601763   81976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:34:45.606286   81976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:34:45.606341   81976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:34:45.612569   81976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
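	The openssl/ln pairs above implement the standard OpenSSL trust-directory layout: each CA copied under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). The pattern, as a sketch with one of the certificates from the log:

	    cert=/usr/share/ca-certificates/minikubeCA.pem      # any of the CAs copied above
	    hash=$(openssl x509 -hash -noout -in "$cert")        # b5213941 for minikubeCA.pem in this run
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"       # ".0" = first certificate with this hash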
	I0816 18:34:45.624890   81976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:34:45.629002   81976 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 18:34:45.629048   81976 kubeadm.go:392] StartCluster: {Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:34:45.629109   81976 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:34:45.629147   81976 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:34:45.675750   81976 cri.go:89] found id: ""
	I0816 18:34:45.675818   81976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:34:45.690501   81976 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:34:45.700138   81976 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:34:45.718445   81976 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:34:45.718462   81976 kubeadm.go:157] found existing configuration files:
	
	I0816 18:34:45.718512   81976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:34:45.729902   81976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:34:45.729963   81976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:34:45.743104   81976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:34:45.758217   81976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:34:45.758286   81976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:34:45.767172   81976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:34:45.776327   81976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:34:45.776373   81976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:34:45.785770   81976 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:34:45.794964   81976 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:34:45.795032   81976 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:34:45.804575   81976 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:34:45.919289   81976 kubeadm.go:310] W0816 18:34:45.895617     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:34:45.920345   81976 kubeadm.go:310] W0816 18:34:45.897019     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:34:46.027959   81976 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:34:55.551131   81976 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:34:55.551244   81976 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:34:55.551377   81976 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:34:55.551506   81976 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:34:55.551629   81976 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:34:55.551722   81976 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:34:55.553395   81976 out.go:235]   - Generating certificates and keys ...
	I0816 18:34:55.553493   81976 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:34:55.553609   81976 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:34:55.553716   81976 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 18:34:55.553807   81976 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 18:34:55.553891   81976 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 18:34:55.553958   81976 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 18:34:55.554035   81976 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 18:34:55.554170   81976 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-774287] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0816 18:34:55.554237   81976 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 18:34:55.554414   81976 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-774287] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0816 18:34:55.554503   81976 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 18:34:55.554585   81976 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 18:34:55.554635   81976 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 18:34:55.554694   81976 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:34:55.554760   81976 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:34:55.554833   81976 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:34:55.554906   81976 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:34:55.554997   81976 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:34:55.555073   81976 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:34:55.555179   81976 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:34:55.555236   81976 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:34:55.557543   81976 out.go:235]   - Booting up control plane ...
	I0816 18:34:55.557626   81976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:34:55.557710   81976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:34:55.557804   81976 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:34:55.557924   81976 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:34:55.557996   81976 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:34:55.558054   81976 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:34:55.558193   81976 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:34:55.558287   81976 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:34:55.558342   81976 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001685997s
	I0816 18:34:55.558407   81976 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:34:55.558492   81976 kubeadm.go:310] [api-check] The API server is healthy after 4.502015091s
	I0816 18:34:55.558663   81976 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:34:55.558833   81976 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:34:55.558921   81976 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:34:55.559188   81976 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-774287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:34:55.559253   81976 kubeadm.go:310] [bootstrap-token] Using token: yvn5rx.4c86rba9eq5eef77
	I0816 18:34:55.560564   81976 out.go:235]   - Configuring RBAC rules ...
	I0816 18:34:55.560694   81976 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:34:55.560793   81976 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:34:55.560968   81976 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:34:55.561125   81976 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:34:55.561237   81976 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:34:55.561329   81976 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:34:55.561451   81976 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:34:55.561495   81976 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:34:55.561535   81976 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:34:55.561542   81976 kubeadm.go:310] 
	I0816 18:34:55.561637   81976 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:34:55.561655   81976 kubeadm.go:310] 
	I0816 18:34:55.561751   81976 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:34:55.561760   81976 kubeadm.go:310] 
	I0816 18:34:55.561795   81976 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:34:55.561881   81976 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:34:55.561949   81976 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:34:55.561957   81976 kubeadm.go:310] 
	I0816 18:34:55.562022   81976 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:34:55.562032   81976 kubeadm.go:310] 
	I0816 18:34:55.562081   81976 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:34:55.562087   81976 kubeadm.go:310] 
	I0816 18:34:55.562129   81976 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:34:55.562200   81976 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:34:55.562258   81976 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:34:55.562264   81976 kubeadm.go:310] 
	I0816 18:34:55.562354   81976 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:34:55.562434   81976 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:34:55.562441   81976 kubeadm.go:310] 
	I0816 18:34:55.562512   81976 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yvn5rx.4c86rba9eq5eef77 \
	I0816 18:34:55.562602   81976 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:34:55.562622   81976 kubeadm.go:310] 	--control-plane 
	I0816 18:34:55.562625   81976 kubeadm.go:310] 
	I0816 18:34:55.562690   81976 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:34:55.562696   81976 kubeadm.go:310] 
	I0816 18:34:55.562757   81976 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yvn5rx.4c86rba9eq5eef77 \
	I0816 18:34:55.562846   81976 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
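	Note the two warnings at the start of the init output: the generated /var/tmp/minikube/kubeadm.yaml still uses the deprecated kubeadm.k8s.io/v1beta3 API. kubeadm's own suggestion, quoted verbatim in those warnings, is to migrate the file; a sketch using the binary path from this run (the output filename is only illustrative):

	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml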
	I0816 18:34:55.562855   81976 cni.go:84] Creating CNI manager for ""
	I0816 18:34:55.562862   81976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:34:55.564062   81976 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:34:55.565525   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:34:55.575316   81976 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:34:55.592600   81976 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:34:55.592675   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:55.592689   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-774287 minikube.k8s.io/updated_at=2024_08_16T18_34_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=newest-cni-774287 minikube.k8s.io/primary=true
	I0816 18:34:55.623432   81976 ops.go:34] apiserver oom_adj: -16
	I0816 18:34:55.842549   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:56.342893   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:56.843497   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:57.343565   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:57.843068   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:58.343173   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:58.843503   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:59.342899   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:59.842609   81976 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:34:59.948255   81976 kubeadm.go:1113] duration metric: took 4.355649573s to wait for elevateKubeSystemPrivileges
	I0816 18:34:59.948302   81976 kubeadm.go:394] duration metric: took 14.319254196s to StartCluster
	I0816 18:34:59.948346   81976 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:59.948421   81976 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:34:59.949677   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:59.949919   81976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 18:34:59.949936   81976 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:34:59.950044   81976 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:34:59.950128   81976 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-774287"
	I0816 18:34:59.950139   81976 config.go:182] Loaded profile config "newest-cni-774287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:59.950156   81976 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-774287"
	I0816 18:34:59.950163   81976 addons.go:69] Setting default-storageclass=true in profile "newest-cni-774287"
	I0816 18:34:59.950191   81976 host.go:66] Checking if "newest-cni-774287" exists ...
	I0816 18:34:59.950211   81976 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-774287"
	I0816 18:34:59.950564   81976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:34:59.950596   81976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:34:59.950607   81976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:34:59.950630   81976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:34:59.951434   81976 out.go:177] * Verifying Kubernetes components...
	I0816 18:34:59.952902   81976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:34:59.966585   81976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I0816 18:34:59.966622   81976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0816 18:34:59.967042   81976 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:34:59.967048   81976 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:34:59.967493   81976 main.go:141] libmachine: Using API Version  1
	I0816 18:34:59.967496   81976 main.go:141] libmachine: Using API Version  1
	I0816 18:34:59.967516   81976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:34:59.967535   81976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:34:59.967874   81976 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:34:59.967936   81976 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:34:59.968093   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:34:59.968496   81976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:34:59.968534   81976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:34:59.971967   81976 addons.go:234] Setting addon default-storageclass=true in "newest-cni-774287"
	I0816 18:34:59.972002   81976 host.go:66] Checking if "newest-cni-774287" exists ...
	I0816 18:34:59.972338   81976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:34:59.972376   81976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:34:59.984417   81976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I0816 18:34:59.984968   81976 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:34:59.985448   81976 main.go:141] libmachine: Using API Version  1
	I0816 18:34:59.985469   81976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:34:59.985767   81976 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:34:59.985956   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:34:59.987655   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:59.989724   81976 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:34:59.991564   81976 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:34:59.991583   81976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:34:59.991602   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:59.992189   81976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0816 18:34:59.992593   81976 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:34:59.993117   81976 main.go:141] libmachine: Using API Version  1
	I0816 18:34:59.993141   81976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:34:59.993515   81976 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:34:59.994020   81976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:34:59.994064   81976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:34:59.995170   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:59.995618   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:59.995642   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:59.995839   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:59.996043   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:59.996239   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:59.996414   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:35:00.010490   81976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I0816 18:35:00.010992   81976 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:00.011451   81976 main.go:141] libmachine: Using API Version  1
	I0816 18:35:00.011476   81976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:00.011786   81976 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:00.011941   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:35:00.013785   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:00.014031   81976 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:35:00.014045   81976 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:35:00.014060   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:00.017420   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:00.017916   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:00.017939   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:00.018234   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:00.018505   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:00.018708   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:00.018852   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:35:00.256321   81976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:35:00.256382   81976 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 18:35:00.385799   81976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:35:00.494776   81976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:35:00.828833   81976 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
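	The ConfigMap rewrite above (the long sed pipeline at 18:35:00.256382) inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host gateway from inside the cluster. Based on that sed expression, the patched Corefile should contain a stanza like the one sketched below, which can be checked with kubectl (illustrative; the ConfigMap contents were not captured in this run):

	    #     hosts {
	    #        192.168.39.1 host.minikube.internal
	    #        fallthrough
	    #     }
	    kubectl --context newest-cni-774287 -n kube-system get configmap coredns -o yaml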
	I0816 18:35:00.830309   81976 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:35:00.830382   81976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:35:00.947653   81976 main.go:141] libmachine: Making call to close driver server
	I0816 18:35:00.947677   81976 main.go:141] libmachine: (newest-cni-774287) Calling .Close
	I0816 18:35:00.947941   81976 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:35:00.948005   81976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:35:00.948018   81976 main.go:141] libmachine: Making call to close driver server
	I0816 18:35:00.948028   81976 main.go:141] libmachine: (newest-cni-774287) Calling .Close
	I0816 18:35:00.947969   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Closing plugin on server side
	I0816 18:35:00.948262   81976 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:35:00.948278   81976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:35:00.966067   81976 main.go:141] libmachine: Making call to close driver server
	I0816 18:35:00.966088   81976 main.go:141] libmachine: (newest-cni-774287) Calling .Close
	I0816 18:35:00.966384   81976 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:35:00.966421   81976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:35:00.966424   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Closing plugin on server side
	I0816 18:35:01.336800   81976 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-774287" context rescaled to 1 replicas
	I0816 18:35:01.506703   81976 api_server.go:72] duration metric: took 1.556728964s to wait for apiserver process to appear ...
	I0816 18:35:01.506734   81976 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:35:01.506757   81976 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0816 18:35:01.506935   81976 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.012128597s)
	I0816 18:35:01.506982   81976 main.go:141] libmachine: Making call to close driver server
	I0816 18:35:01.506997   81976 main.go:141] libmachine: (newest-cni-774287) Calling .Close
	I0816 18:35:01.507309   81976 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:35:01.507388   81976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:35:01.507411   81976 main.go:141] libmachine: Making call to close driver server
	I0816 18:35:01.507424   81976 main.go:141] libmachine: (newest-cni-774287) Calling .Close
	I0816 18:35:01.507678   81976 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:35:01.507716   81976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:35:01.509424   81976 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0816 18:35:01.510521   81976 addons.go:510] duration metric: took 1.560479258s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0816 18:35:01.515011   81976 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0816 18:35:01.526253   81976 api_server.go:141] control plane version: v1.31.0
	I0816 18:35:01.526276   81976 api_server.go:131] duration metric: took 19.535959ms to wait for apiserver health ...
	I0816 18:35:01.526285   81976 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:35:01.542469   81976 system_pods.go:59] 8 kube-system pods found
	I0816 18:35:01.542508   81976 system_pods.go:61] "coredns-6f6b679f8f-8djgt" [7c51786b-3d74-4c2b-bb32-c99b9aeebdc4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:35:01.542522   81976 system_pods.go:61] "coredns-6f6b679f8f-mkf4v" [2a1e64f6-8e75-44eb-a56e-2b906451e5d7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:35:01.542534   81976 system_pods.go:61] "etcd-newest-cni-774287" [56f584bc-707b-45f0-bd0b-88a6e501e983] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:35:01.542547   81976 system_pods.go:61] "kube-apiserver-newest-cni-774287" [13190a30-4671-4519-bb50-d30afe24c4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:35:01.542556   81976 system_pods.go:61] "kube-controller-manager-newest-cni-774287" [5417f08b-55b4-4aef-8f2b-c742344794de] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:35:01.542569   81976 system_pods.go:61] "kube-proxy-5ng9m" [2b4ced34-b436-4834-991b-776537026b48] Running
	I0816 18:35:01.542577   81976 system_pods.go:61] "kube-scheduler-newest-cni-774287" [daac0293-c0d0-4e25-9ae1-9c27698d481b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:35:01.542584   81976 system_pods.go:61] "storage-provisioner" [34347a9d-1d36-48e0-a138-40916f0ad78f] Pending
	I0816 18:35:01.542594   81976 system_pods.go:74] duration metric: took 16.30211ms to wait for pod list to return data ...
	I0816 18:35:01.542606   81976 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:35:01.552393   81976 default_sa.go:45] found service account: "default"
	I0816 18:35:01.552424   81976 default_sa.go:55] duration metric: took 9.810458ms for default service account to be created ...
	I0816 18:35:01.552438   81976 kubeadm.go:582] duration metric: took 1.602468132s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 18:35:01.552459   81976 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:35:01.562140   81976 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:35:01.562170   81976 node_conditions.go:123] node cpu capacity is 2
	I0816 18:35:01.562183   81976 node_conditions.go:105] duration metric: took 9.718667ms to run NodePressure ...
	I0816 18:35:01.562198   81976 start.go:241] waiting for startup goroutines ...
	I0816 18:35:01.562208   81976 start.go:246] waiting for cluster config update ...
	I0816 18:35:01.562223   81976 start.go:255] writing updated cluster config ...
	I0816 18:35:01.562553   81976 ssh_runner.go:195] Run: rm -f paused
	I0816 18:35:01.611916   81976 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:35:01.613927   81976 out.go:177] * Done! kubectl is now configured to use "newest-cni-774287" cluster and "default" namespace by default
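	The wait loop above polls the apiserver healthz endpoint (https://192.168.39.194:8443/healthz, taken from the log) until it returns 200 "ok" before the addon enable step is reported as complete. Below is a minimal Go sketch of such a probe; skipping TLS verification is an assumption made for brevity here, whereas minikube itself trusts the cluster CA.

	    // healthzprobe.go - sketch of the apiserver healthz wait seen above.
	    // Endpoint comes from the log; InsecureSkipVerify is an assumption.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for {
	            resp, err := client.Get("https://192.168.39.194:8443/healthz")
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	                    fmt.Println("apiserver is healthy")
	                    return
	                }
	            }
	            time.Sleep(time.Second) // retry until the control plane answers
	        }
	    }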
	
	
	==> CRI-O <==
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.779238290Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723832354286914338,StartedAt:1723832354374553473,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g6zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a027eb-99e3-4b48-b9f1-2fc80cad9d2e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/71a027eb-99e3-4b48-b9f1-2fc80cad9d2e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/71a027eb-99e3-4b48-b9f1-2fc80cad9d2e/containers/kube-proxy/61691101,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/l
ib/kubelet/pods/71a027eb-99e3-4b48-b9f1-2fc80cad9d2e/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/71a027eb-99e3-4b48-b9f1-2fc80cad9d2e/volumes/kubernetes.io~projected/kube-api-access-n9gtd,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-6g6zx_71a027eb-99e3-4b48-b9f1-2fc80cad9d2e/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-co
llector/interceptors.go:74" id=be2900ec-5c66-4261-ad11-180014c37e7b name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.779978440Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,Verbose:false,}" file="otel-collector/interceptors.go:62" id=3c148469-152a-4555-a2b1-a7ee0c6ec3a0 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.781067868Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1723832343513359900,StartedAt:1723832343639880152,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435ac1d94e75156d97949e377dffe47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6435ac1d94e75156d97949e377dffe47/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6435ac1d94e75156d97949e377dffe47/containers/etcd/3e487cc6,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-
no-preload-864476_6435ac1d94e75156d97949e377dffe47/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3c148469-152a-4555-a2b1-a7ee0c6ec3a0 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.781542651Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7a3e3380-8689-4c02-9019-0bfb0fcef149 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.781661724Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1723832343479512783,StartedAt:1723832343586984540,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20321a4e2f5424b7598e3868eada327,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f20321a4e2f5424b7598e3868eada327/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f20321a4e2f5424b7598e3868eada327/containers/kube-controller-manager/359870b3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE
,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-864476_f20321a4e2f5424b7598e3868eada327/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetM
ems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7a3e3380-8689-4c02-9019-0bfb0fcef149 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.783011498Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,Verbose:false,}" file="otel-collector/interceptors.go:62" id=03ff0be7-9e11-4a29-ba88-4ff917db0d34 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.783105256Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1723832343470582564,StartedAt:1723832343561465960,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/4f5c3a5b91d0afc01c0c747e5aad20a3/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/4f5c3a5b91d0afc01c0c747e5aad20a3/containers/kube-apiserver/9f6631a4,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Contain
erPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-no-preload-864476_4f5c3a5b91d0afc01c0c747e5aad20a3/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=03ff0be7-9e11-4a29-ba88-4ff917db0d34 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.783505226Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=69be0518-51c5-42f6-b33c-ce5eba700f1c name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.783605762Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1723832343434123412,StartedAt:1723832343521666004,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b251e07bba58ee03e247d5688bb7dd6f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b251e07bba58ee03e247d5688bb7dd6f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b251e07bba58ee03e247d5688bb7dd6f/containers/kube-scheduler/c6a4b509,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-no-preload-864476_b251e07bba58ee03e247d5688bb7dd6f/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPe
riod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=69be0518-51c5-42f6-b33c-ce5eba700f1c name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.809989882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c07e32e-738b-47e6-bbb5-464f71257bd5 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.810088800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c07e32e-738b-47e6-bbb5-464f71257bd5 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.811059681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a17e9803-8ef5-4551-9474-70ac165f3f4d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.811536125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833308811514593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a17e9803-8ef5-4551-9474-70ac165f3f4d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.812323282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=931a547e-fc79-4736-ad1a-b9d31dd8e896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.812390226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=931a547e-fc79-4736-ad1a-b9d31dd8e896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.812635895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247,PodSandboxId:3531663ca9faff9fef1494473b5cbadc7e98280ed9dccf83433dc703f94980fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832355468099317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05cdb7c-d74e-4008-a0fc-5eb6df9595af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5,PodSandboxId:c608522a82bb040cb2825a2d38b57765d3decb28382156984a58fce7da6764d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832355004952162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qr4q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20f51f3-6786-496b-a6bc-7457462e46e9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92,PodSandboxId:3f614a7790466d04f2a017da09b86df06a8b93fda4ee1d32ee194a68dbc1e911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832354820511186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6zfgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99
157766-5089-4abe-a888-ec5992e5720a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b,PodSandboxId:b4cd5a0c33fdd4e96136732c9a6036cc774f873db96a7f66a8e34c1b2ce0e08e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723832354059037305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g6zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a027eb-99e3-4b48-b9f1-2fc80cad9d2e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,PodSandboxId:f5b40775bbeeeed4b4e4cc32fdff38c091fb85b541143717f9a042442473054a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832343401012402,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435ac1d94e75156d97949e377dffe47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,PodSandboxId:e13790f458072e3de7198ed06ad8cd54f77f123f44e61e024895882b3919445d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832343421929415,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20321a4e2f5424b7598e3868eada327,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,PodSandboxId:ed6e330ed921c76738379aa0ea0215f2a477cecf27ef625074f5a57ee820ec43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832343377345874,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,PodSandboxId:2c607d3875087a77c393243c3487231071192d67bb205bee3c0836e4dc171236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832343384257730,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b251e07bba58ee03e247d5688bb7dd6f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d,PodSandboxId:5ab70ad2d348cae307e652eb0360fb992cd2935792e2165c8d9d2c66e88eeac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832055989521948,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=931a547e-fc79-4736-ad1a-b9d31dd8e896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.835018551Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=a8d998ee-9f16-495d-b7df-f0c97850d1c4 name=/runtime.v1.RuntimeService/Status
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.835113547Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a8d998ee-9f16-495d-b7df-f0c97850d1c4 name=/runtime.v1.RuntimeService/Status
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.846052317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f52090c-2600-4924-9c3d-ffb66aa54c6f name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.846135203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f52090c-2600-4924-9c3d-ffb66aa54c6f name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.847088726Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9adecbf3-8fe8-457e-819d-769d3cab23a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.847499147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833308847477530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9adecbf3-8fe8-457e-819d-769d3cab23a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.847951808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f069d9b-8f72-4ec0-9bd6-448d51aeb826 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.848019269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f069d9b-8f72-4ec0-9bd6-448d51aeb826 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:08 no-preload-864476 crio[741]: time="2024-08-16 18:35:08.848393973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247,PodSandboxId:3531663ca9faff9fef1494473b5cbadc7e98280ed9dccf83433dc703f94980fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832355468099317,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05cdb7c-d74e-4008-a0fc-5eb6df9595af,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5,PodSandboxId:c608522a82bb040cb2825a2d38b57765d3decb28382156984a58fce7da6764d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832355004952162,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qr4q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d20f51f3-6786-496b-a6bc-7457462e46e9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92,PodSandboxId:3f614a7790466d04f2a017da09b86df06a8b93fda4ee1d32ee194a68dbc1e911,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832354820511186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6zfgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99
157766-5089-4abe-a888-ec5992e5720a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b,PodSandboxId:b4cd5a0c33fdd4e96136732c9a6036cc774f873db96a7f66a8e34c1b2ce0e08e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723832354059037305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g6zx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71a027eb-99e3-4b48-b9f1-2fc80cad9d2e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa,PodSandboxId:f5b40775bbeeeed4b4e4cc32fdff38c091fb85b541143717f9a042442473054a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832343401012402,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6435ac1d94e75156d97949e377dffe47,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e,PodSandboxId:e13790f458072e3de7198ed06ad8cd54f77f123f44e61e024895882b3919445d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832343421929415,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20321a4e2f5424b7598e3868eada327,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5,PodSandboxId:ed6e330ed921c76738379aa0ea0215f2a477cecf27ef625074f5a57ee820ec43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832343377345874,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f,PodSandboxId:2c607d3875087a77c393243c3487231071192d67bb205bee3c0836e4dc171236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832343384257730,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b251e07bba58ee03e247d5688bb7dd6f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d,PodSandboxId:5ab70ad2d348cae307e652eb0360fb992cd2935792e2165c8d9d2c66e88eeac5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832055989521948,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-864476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f5c3a5b91d0afc01c0c747e5aad20a3,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f069d9b-8f72-4ec0-9bd6-448d51aeb826 name=/runtime.v1.RuntimeService/ListContainers
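	The CRI-O debug entries above are gRPC round-trips on /runtime.v1.RuntimeService (ContainerStatus, ListContainers, Version, ImageFsInfo) over the crio.sock endpoint named in the node's cri-socket annotation. A minimal sketch of a ListContainers client against that socket follows, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available; it illustrates the call being logged and is not minikube's or the kubelet's own code.

	    // crilist.go - sketch of a ListContainers call like the ones logged above.
	    // Socket path is taken from the cri-socket annotation; module choice is an assumption.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // Empty filter: return the full container list, as in the log above.
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.State, c.Metadata.Name)
	        }
	    }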
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	253d2d8e44fc5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   3531663ca9faf       storage-provisioner
	c94f1c42210f3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   c608522a82bb0       coredns-6f6b679f8f-qr4q9
	af0515870115c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   3f614a7790466       coredns-6f6b679f8f-6zfgr
	230fb46a1bbd9       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   15 minutes ago      Running             kube-proxy                0                   b4cd5a0c33fdd       kube-proxy-6g6zx
	b854bb0edfc4f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   e13790f458072       kube-controller-manager-no-preload-864476
	a88473ef80e0f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   f5b40775bbeee       etcd-no-preload-864476
	57f5608b266b0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   2c607d3875087       kube-scheduler-no-preload-864476
	ae3099e546ae8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   ed6e330ed921c       kube-apiserver-no-preload-864476
	fb6801af1233b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   20 minutes ago      Exited              kube-apiserver            1                   5ab70ad2d348c       kube-apiserver-no-preload-864476
	
	
	==> coredns [af0515870115c4f2eb2b6740f5f78163e519f74d0074b55c05f3b999237a3e92] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c94f1c42210f3dadff31e144ed0fdc59e0e6be31403bd0be8a952e0f261dc7e5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-864476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-864476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=no-preload-864476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 18:19:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-864476
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:34:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:34:38 +0000   Fri, 16 Aug 2024 18:19:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:34:38 +0000   Fri, 16 Aug 2024 18:19:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:34:38 +0000   Fri, 16 Aug 2024 18:19:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:34:38 +0000   Fri, 16 Aug 2024 18:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.50
	  Hostname:    no-preload-864476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98901b82b8c8489f9453902580550602
	  System UUID:                98901b82-b8c8-489f-9453-902580550602
	  Boot ID:                    e954e701-4508-4b66-a634-9625ff35ac85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6zfgr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-qr4q9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-864476                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-no-preload-864476             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-no-preload-864476    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6g6zx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-864476             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-r6cph              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node no-preload-864476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node no-preload-864476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node no-preload-864476 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node no-preload-864476 event: Registered Node no-preload-864476 in Controller
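	The node description above is the data behind the earlier NodePressure check (node_conditions.go in the log): the node's conditions plus its cpu and ephemeral-storage capacity. A minimal client-go sketch that reads the same fields, assuming a kubeconfig at the default ~/.kube/config path; illustrative only, not the code path the test uses.

	    // nodecond.go - sketch of reading node conditions/capacity as shown in
	    // the "describe nodes" section above. Kubeconfig path is an assumption.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "os"
	        "path/filepath"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            log.Fatal(err)
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, n := range nodes.Items {
	            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
	                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	            for _, c := range n.Status.Conditions {
	                fmt.Printf("  %-16s %s (%s)\n", c.Type, c.Status, c.Reason)
	            }
	        }
	    }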
	
	
	==> dmesg <==
	[  +0.036150] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680657] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.834416] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.548155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.272897] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.060738] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062899] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.191981] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.147334] systemd-fstab-generator[696]: Ignoring "noauto" option for root device
	[  +0.280754] systemd-fstab-generator[725]: Ignoring "noauto" option for root device
	[Aug16 18:14] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.055893] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.200234] systemd-fstab-generator[1439]: Ignoring "noauto" option for root device
	[  +4.651886] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.552124] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.322395] kauditd_printk_skb: 28 callbacks suppressed
	[Aug16 18:19] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.436175] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +4.603573] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.447310] systemd-fstab-generator[3405]: Ignoring "noauto" option for root device
	[  +5.417097] systemd-fstab-generator[3521]: Ignoring "noauto" option for root device
	[  +0.133430] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.665594] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [a88473ef80e0ff82a5245a6ff0e5cb8b0dea144f21a3406eb7e7ddc516f7aefa] <==
	{"level":"info","ts":"2024-08-16T18:19:03.936788Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c0dcbd712fbd8799","local-member-attributes":"{Name:no-preload-864476 ClientURLs:[https://192.168.50.50:2379]}","request-path":"/0/members/c0dcbd712fbd8799/attributes","cluster-id":"6b98348baa467fce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T18:19:03.936837Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:19:03.936909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:19:03.937253Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:03.938974Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:03.941085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T18:19:03.941406Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:03.941522Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:03.942889Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:03.946229Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.50:2379"}
	{"level":"info","ts":"2024-08-16T18:19:03.946394Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6b98348baa467fce","local-member-id":"c0dcbd712fbd8799","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:03.946489Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:03.946546Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:29:04.314788Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-08-16T18:29:04.328799Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":683,"took":"13.568067ms","hash":4037443426,"current-db-size-bytes":2166784,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2166784,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-16T18:29:04.329008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4037443426,"revision":683,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T18:34:04.322216Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":926}
	{"level":"info","ts":"2024-08-16T18:34:04.326406Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":926,"took":"3.772805ms","hash":3486407138,"current-db-size-bytes":2166784,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1531904,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-16T18:34:04.326469Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3486407138,"revision":926,"compact-revision":683}
	{"level":"info","ts":"2024-08-16T18:34:46.544079Z","caller":"traceutil/trace.go:171","msg":"trace[1611336036] transaction","detail":"{read_only:false; response_revision:1206; number_of_response:1; }","duration":"236.372387ms","start":"2024-08-16T18:34:46.307666Z","end":"2024-08-16T18:34:46.544039Z","steps":["trace[1611336036] 'process raft request'  (duration: 236.214038ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T18:34:47.780767Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.706573ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9771000692660293081 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.50\" mod_revision:1198 > success:<request_put:<key:\"/registry/masterleases/192.168.50.50\" value_size:66 lease:547628655805517271 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.50\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T18:34:47.780892Z","caller":"traceutil/trace.go:171","msg":"trace[290187473] linearizableReadLoop","detail":"{readStateIndex:1410; appliedIndex:1409; }","duration":"151.032179ms","start":"2024-08-16T18:34:47.629845Z","end":"2024-08-16T18:34:47.780877Z","steps":["trace[290187473] 'read index received'  (duration: 19.267311ms)","trace[290187473] 'applied index is now lower than readState.Index'  (duration: 131.763916ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T18:34:47.780992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.136623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-16T18:34:47.781023Z","caller":"traceutil/trace.go:171","msg":"trace[36956461] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:1207; }","duration":"151.172641ms","start":"2024-08-16T18:34:47.629841Z","end":"2024-08-16T18:34:47.781014Z","steps":["trace[36956461] 'agreement among raft nodes before linearized reading'  (duration: 151.076873ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T18:34:47.781112Z","caller":"traceutil/trace.go:171","msg":"trace[1762738506] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"260.212283ms","start":"2024-08-16T18:34:47.520878Z","end":"2024-08-16T18:34:47.781090Z","steps":["trace[1762738506] 'process raft request'  (duration: 128.320295ms)","trace[1762738506] 'compare'  (duration: 130.606549ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:35:09 up 21 min,  0 users,  load average: 0.27, 0.17, 0.13
	Linux no-preload-864476 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae3099e546ae89ec056933cf6c1b07ab17a5f143c9c7145dceb4f42d813b1cc5] <==
	 > logger="UnhandledError"
	I0816 18:32:06.861155       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:34:05.857583       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:34:05.857763       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:34:06.859715       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:34:06.859838       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:34:06.859724       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:34:06.859896       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:34:06.861055       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:34:06.861118       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:35:06.862212       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:35:06.862346       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 18:35:06.862212       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:35:06.862453       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 18:35:06.863659       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:35:06.863729       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fb6801af1233b05eace939ac465fc546ff32a4cfcebf1a2f037df2c2b82da34d] <==
	W0816 18:18:56.169155       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.197654       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.203189       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.207608       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.263439       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.314705       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.342513       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.352543       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.375557       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.390060       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.411526       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.417410       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.418806       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.453219       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.454622       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.516419       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.563152       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.564424       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.628741       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.856057       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.867542       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:56.900446       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:57.214721       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:57.283741       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:18:59.835194       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b854bb0edfc4f26912b0950a720508eb029032beede67e90e428be1e6dcb193e] <==
	E0816 18:29:42.948143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:29:43.442057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:30:12.958060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:30:13.453726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:30:24.566758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="167.544µs"
	I0816 18:30:38.566821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="100.155µs"
	E0816 18:30:42.964253       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:30:43.461818       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:31:12.970234       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:31:13.469673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:31:42.976687       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:31:43.479560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:32:12.984034       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:32:13.489883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:32:42.991145       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:32:43.498759       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:33:12.997130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:33:13.506207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:33:43.002894       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:33:43.513909       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:34:13.011471       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:34:13.532569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:34:38.503747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-864476"
	E0816 18:34:43.019555       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:34:43.543089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [230fb46a1bbd951425820b63075fc2b72c92f05c2efb94a91b76a01b07a6775b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 18:19:14.593059       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 18:19:14.646064       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.50"]
	E0816 18:19:14.646187       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 18:19:14.792396       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 18:19:14.792451       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 18:19:14.792483       1 server_linux.go:169] "Using iptables Proxier"
	I0816 18:19:14.794992       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 18:19:14.795260       1 server.go:483] "Version info" version="v1.31.0"
	I0816 18:19:14.795310       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:19:14.797685       1 config.go:197] "Starting service config controller"
	I0816 18:19:14.797737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 18:19:14.797771       1 config.go:104] "Starting endpoint slice config controller"
	I0816 18:19:14.797786       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 18:19:14.798662       1 config.go:326] "Starting node config controller"
	I0816 18:19:14.798670       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 18:19:14.898402       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 18:19:14.898491       1 shared_informer.go:320] Caches are synced for service config
	I0816 18:19:14.898716       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [57f5608b266b010f3a98b8c7d4ef55dff5ea671b5a01e8b5a260ff29521cca6f] <==
	W0816 18:19:05.878333       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 18:19:05.879858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 18:19:05.879967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:05.880076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:19:05.880243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:05.878822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:19:05.880444       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 18:19:05.878859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:05.880544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.781137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:06.781251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.839176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 18:19:06.840271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.865091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:19:06.865151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:06.895383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 18:19:06.895457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:07.071698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:07.071799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:07.096842       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:19:07.096891       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 18:19:09.359563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 18:34:08 no-preload-864476 kubelet[3412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 18:34:08 no-preload-864476 kubelet[3412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 18:34:08 no-preload-864476 kubelet[3412]: E0816 18:34:08.759421    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833248758696876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:08 no-preload-864476 kubelet[3412]: E0816 18:34:08.759451    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833248758696876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:13 no-preload-864476 kubelet[3412]: E0816 18:34:13.547868    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:34:18 no-preload-864476 kubelet[3412]: E0816 18:34:18.761311    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833258760565198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:18 no-preload-864476 kubelet[3412]: E0816 18:34:18.761365    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833258760565198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:28 no-preload-864476 kubelet[3412]: E0816 18:34:28.549631    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:34:28 no-preload-864476 kubelet[3412]: E0816 18:34:28.763393    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833268762777952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:28 no-preload-864476 kubelet[3412]: E0816 18:34:28.763470    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833268762777952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:38 no-preload-864476 kubelet[3412]: E0816 18:34:38.766934    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833278765881662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:38 no-preload-864476 kubelet[3412]: E0816 18:34:38.767616    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833278765881662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:43 no-preload-864476 kubelet[3412]: E0816 18:34:43.548698    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:34:48 no-preload-864476 kubelet[3412]: E0816 18:34:48.771028    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833288769072295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:48 no-preload-864476 kubelet[3412]: E0816 18:34:48.771094    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833288769072295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:56 no-preload-864476 kubelet[3412]: E0816 18:34:56.548401    3412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-r6cph" podUID="a842267c-2c75-4799-aefc-2fb92ccb9129"
	Aug 16 18:34:58 no-preload-864476 kubelet[3412]: E0816 18:34:58.772672    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833298772340423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:58 no-preload-864476 kubelet[3412]: E0816 18:34:58.772743    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833298772340423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:08 no-preload-864476 kubelet[3412]: E0816 18:35:08.585987    3412 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 18:35:08 no-preload-864476 kubelet[3412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 18:35:08 no-preload-864476 kubelet[3412]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 18:35:08 no-preload-864476 kubelet[3412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 18:35:08 no-preload-864476 kubelet[3412]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 18:35:08 no-preload-864476 kubelet[3412]: E0816 18:35:08.775688    3412 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833308774496034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:08 no-preload-864476 kubelet[3412]: E0816 18:35:08.775711    3412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833308774496034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [253d2d8e44fc5c972f81ff8ad6191a1229971cb9c39eebda22d6da42fbd5f247] <==
	I0816 18:19:15.568890       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 18:19:15.587753       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 18:19:15.587807       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 18:19:15.604117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 18:19:15.604992       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-864476_1bb0b690-6376-46ef-9ce2-3ed8222c67dd!
	I0816 18:19:15.605403       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a87f0ca-7fd5-417d-81d9-efa74cb5b7ce", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-864476_1bb0b690-6376-46ef-9ce2-3ed8222c67dd became leader
	I0816 18:19:15.713733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-864476_1bb0b690-6376-46ef-9ce2-3ed8222c67dd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-864476 -n no-preload-864476
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-864476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-r6cph
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-864476 describe pod metrics-server-6867b74b74-r6cph
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-864476 describe pod metrics-server-6867b74b74-r6cph: exit status 1 (65.760994ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-r6cph" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-864476 describe pod metrics-server-6867b74b74-r6cph: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (399.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (423.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 18:35:42.024798397 +0000 UTC m=+6442.343343419
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-256678 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.717µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-256678 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-256678 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-256678 logs -n 25: (1.227856282s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC | 16 Aug 24 18:34 UTC |
	| start   | -p newest-cni-774287 --memory=2200 --alsologtostderr   | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC | 16 Aug 24 18:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC | 16 Aug 24 18:34 UTC |
	| addons  | enable metrics-server -p newest-cni-774287             | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:35 UTC | 16 Aug 24 18:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-774287                                   | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:35 UTC | 16 Aug 24 18:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-774287                  | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:35 UTC | 16 Aug 24 18:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:35 UTC | 16 Aug 24 18:35 UTC |
	| start   | -p newest-cni-774287 --memory=2200 --alsologtostderr   | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:35 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:35:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:35:10.116032   82860 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:35:10.116138   82860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:35:10.116144   82860 out.go:358] Setting ErrFile to fd 2...
	I0816 18:35:10.116148   82860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:35:10.116345   82860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:35:10.116952   82860 out.go:352] Setting JSON to false
	I0816 18:35:10.118004   82860 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8208,"bootTime":1723825102,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:35:10.118072   82860 start.go:139] virtualization: kvm guest
	I0816 18:35:10.120145   82860 out.go:177] * [newest-cni-774287] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:35:10.121636   82860 notify.go:220] Checking for updates...
	I0816 18:35:10.121663   82860 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:35:10.122969   82860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:35:10.124213   82860 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:35:10.125514   82860 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:35:10.126744   82860 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:35:10.128039   82860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:35:10.129588   82860 config.go:182] Loaded profile config "newest-cni-774287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:35:10.129988   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:10.130037   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:10.149008   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33325
	I0816 18:35:10.149479   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:10.149958   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:10.149980   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:10.150362   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:10.150558   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:10.150788   82860 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:35:10.151069   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:10.151116   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:10.167704   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0816 18:35:10.168101   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:10.168696   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:10.168744   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:10.169063   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:10.169276   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:10.212265   82860 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:35:10.213531   82860 start.go:297] selected driver: kvm2
	I0816 18:35:10.213558   82860 start.go:901] validating driver "kvm2" against &{Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:35:10.213707   82860 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:35:10.214793   82860 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:35:10.214874   82860 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:35:10.231218   82860 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:35:10.231585   82860 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 18:35:10.231618   82860 cni.go:84] Creating CNI manager for ""
	I0816 18:35:10.231625   82860 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:35:10.231662   82860 start.go:340] cluster config:
	{Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:35:10.231765   82860 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:35:10.233530   82860 out.go:177] * Starting "newest-cni-774287" primary control-plane node in "newest-cni-774287" cluster
	I0816 18:35:10.234739   82860 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:35:10.234775   82860 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:35:10.234796   82860 cache.go:56] Caching tarball of preloaded images
	I0816 18:35:10.234897   82860 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:35:10.234912   82860 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 18:35:10.235022   82860 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json ...
	I0816 18:35:10.235215   82860 start.go:360] acquireMachinesLock for newest-cni-774287: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:35:10.235263   82860 start.go:364] duration metric: took 28.043µs to acquireMachinesLock for "newest-cni-774287"
	I0816 18:35:10.235282   82860 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:35:10.235290   82860 fix.go:54] fixHost starting: 
	I0816 18:35:10.235565   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:10.235601   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:10.251646   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0816 18:35:10.252090   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:10.252555   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:10.252577   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:10.252891   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:10.253077   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:10.253249   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:35:10.254927   82860 fix.go:112] recreateIfNeeded on newest-cni-774287: state=Stopped err=<nil>
	I0816 18:35:10.254964   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	W0816 18:35:10.255110   82860 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:35:10.256874   82860 out.go:177] * Restarting existing kvm2 VM for "newest-cni-774287" ...
	I0816 18:35:10.258090   82860 main.go:141] libmachine: (newest-cni-774287) Calling .Start
	I0816 18:35:10.258283   82860 main.go:141] libmachine: (newest-cni-774287) Ensuring networks are active...
	I0816 18:35:10.259037   82860 main.go:141] libmachine: (newest-cni-774287) Ensuring network default is active
	I0816 18:35:10.259400   82860 main.go:141] libmachine: (newest-cni-774287) Ensuring network mk-newest-cni-774287 is active
	I0816 18:35:10.337581   82860 main.go:141] libmachine: (newest-cni-774287) Getting domain xml...
	I0816 18:35:10.680067   82860 main.go:141] libmachine: (newest-cni-774287) Creating domain...
	I0816 18:35:11.923408   82860 main.go:141] libmachine: (newest-cni-774287) Waiting to get IP...
	I0816 18:35:11.924255   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:11.924702   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:11.924783   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:11.924689   82934 retry.go:31] will retry after 294.755078ms: waiting for machine to come up
	I0816 18:35:12.221233   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:12.221758   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:12.221784   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:12.221705   82934 retry.go:31] will retry after 238.966949ms: waiting for machine to come up
	I0816 18:35:12.462259   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:12.462780   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:12.462804   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:12.462755   82934 retry.go:31] will retry after 432.644931ms: waiting for machine to come up
	I0816 18:35:12.897519   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:12.897972   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:12.898000   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:12.897947   82934 retry.go:31] will retry after 605.993544ms: waiting for machine to come up
	I0816 18:35:13.505743   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:13.506309   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:13.506342   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:13.506238   82934 retry.go:31] will retry after 525.130144ms: waiting for machine to come up
	I0816 18:35:14.032615   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:14.033128   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:14.033160   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:14.033083   82934 retry.go:31] will retry after 811.730191ms: waiting for machine to come up
	I0816 18:35:14.847005   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:14.847430   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:14.847461   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:14.847398   82934 retry.go:31] will retry after 1.046653971s: waiting for machine to come up
	I0816 18:35:15.895357   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:15.895828   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:15.895852   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:15.895756   82934 retry.go:31] will retry after 984.307754ms: waiting for machine to come up
	I0816 18:35:16.881692   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:16.882354   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:16.882373   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:16.882299   82934 retry.go:31] will retry after 1.504278112s: waiting for machine to come up
	I0816 18:35:18.388521   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:18.389049   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:18.389107   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:18.389018   82934 retry.go:31] will retry after 2.089262289s: waiting for machine to come up
	I0816 18:35:20.480118   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:20.480644   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:20.480681   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:20.480582   82934 retry.go:31] will retry after 2.790576857s: waiting for machine to come up
	I0816 18:35:23.274516   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:23.274923   82860 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:35:23.274954   82860 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:35:23.274879   82934 retry.go:31] will retry after 3.564158s: waiting for machine to come up
	I0816 18:35:26.841268   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.841753   82860 main.go:141] libmachine: (newest-cni-774287) Found IP for machine: 192.168.39.194
	I0816 18:35:26.841799   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has current primary IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.841806   82860 main.go:141] libmachine: (newest-cni-774287) Reserving static IP address...
	I0816 18:35:26.842146   82860 main.go:141] libmachine: (newest-cni-774287) Reserved static IP address: 192.168.39.194
	I0816 18:35:26.842159   82860 main.go:141] libmachine: (newest-cni-774287) Waiting for SSH to be available...
	I0816 18:35:26.842184   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "newest-cni-774287", mac: "52:54:00:2d:15:e2", ip: "192.168.39.194"} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:26.842217   82860 main.go:141] libmachine: (newest-cni-774287) DBG | skip adding static IP to network mk-newest-cni-774287 - found existing host DHCP lease matching {name: "newest-cni-774287", mac: "52:54:00:2d:15:e2", ip: "192.168.39.194"}
	I0816 18:35:26.842232   82860 main.go:141] libmachine: (newest-cni-774287) DBG | Getting to WaitForSSH function...
	I0816 18:35:26.844449   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.844780   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:26.844807   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.844884   82860 main.go:141] libmachine: (newest-cni-774287) DBG | Using SSH client type: external
	I0816 18:35:26.844921   82860 main.go:141] libmachine: (newest-cni-774287) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa (-rw-------)
	I0816 18:35:26.844969   82860 main.go:141] libmachine: (newest-cni-774287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:35:26.844987   82860 main.go:141] libmachine: (newest-cni-774287) DBG | About to run SSH command:
	I0816 18:35:26.845001   82860 main.go:141] libmachine: (newest-cni-774287) DBG | exit 0
	I0816 18:35:26.968489   82860 main.go:141] libmachine: (newest-cni-774287) DBG | SSH cmd err, output: <nil>: 
	I0816 18:35:26.968842   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetConfigRaw
	I0816 18:35:26.969452   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:35:26.971922   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.972361   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:26.972401   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.972692   82860 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json ...
	I0816 18:35:26.972922   82860 machine.go:93] provisionDockerMachine start ...
	I0816 18:35:26.972946   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:26.973178   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:26.975894   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.976232   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:26.976263   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:26.976417   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:26.976617   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:26.976801   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:26.976952   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:26.977119   82860 main.go:141] libmachine: Using SSH client type: native
	I0816 18:35:26.977294   82860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:35:26.977304   82860 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:35:27.080705   82860 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:35:27.080739   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:35:27.081061   82860 buildroot.go:166] provisioning hostname "newest-cni-774287"
	I0816 18:35:27.081087   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:35:27.081315   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:27.083974   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.084295   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:27.084333   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.084483   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:27.084681   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:27.084834   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:27.084941   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:27.085133   82860 main.go:141] libmachine: Using SSH client type: native
	I0816 18:35:27.085295   82860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:35:27.085312   82860 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-774287 && echo "newest-cni-774287" | sudo tee /etc/hostname
	I0816 18:35:27.202696   82860 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-774287
	
	I0816 18:35:27.202731   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:27.205636   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.206020   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:27.206050   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.206196   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:27.206398   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:27.206603   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:27.206735   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:27.206894   82860 main.go:141] libmachine: Using SSH client type: native
	I0816 18:35:27.207112   82860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:35:27.207137   82860 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-774287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-774287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-774287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:35:27.320500   82860 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:35:27.320527   82860 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:35:27.320542   82860 buildroot.go:174] setting up certificates
	I0816 18:35:27.320551   82860 provision.go:84] configureAuth start
	I0816 18:35:27.320560   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:35:27.320891   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:35:27.323838   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.324255   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:27.324290   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.324465   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:27.326761   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.327231   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:27.327263   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.327499   82860 provision.go:143] copyHostCerts
	I0816 18:35:27.327579   82860 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:35:27.327599   82860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:35:27.327681   82860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:35:27.327785   82860 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:35:27.327794   82860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:35:27.327824   82860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:35:27.327882   82860 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:35:27.327890   82860 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:35:27.327920   82860 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:35:27.327977   82860 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.newest-cni-774287 san=[127.0.0.1 192.168.39.194 localhost minikube newest-cni-774287]
	I0816 18:35:27.595408   82860 provision.go:177] copyRemoteCerts
	I0816 18:35:27.595474   82860 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:35:27.595499   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:27.598511   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.598911   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:27.598945   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.599103   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:27.599337   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:27.599496   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:27.599657   82860 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:35:27.682531   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:35:27.707663   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:35:27.730537   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:35:27.753396   82860 provision.go:87] duration metric: took 432.833047ms to configureAuth
	I0816 18:35:27.753427   82860 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:35:27.753606   82860 config.go:182] Loaded profile config "newest-cni-774287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:35:27.753669   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:27.756710   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.757039   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:27.757061   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:27.757266   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:27.757464   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:27.757656   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:27.757827   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:27.758009   82860 main.go:141] libmachine: Using SSH client type: native
	I0816 18:35:27.758170   82860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:35:27.758185   82860 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:35:28.021216   82860 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:35:28.021242   82860 machine.go:96] duration metric: took 1.048305236s to provisionDockerMachine
	I0816 18:35:28.021254   82860 start.go:293] postStartSetup for "newest-cni-774287" (driver="kvm2")
	I0816 18:35:28.021264   82860 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:35:28.021278   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:28.021660   82860 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:35:28.021685   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:28.024250   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.024662   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:28.024696   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.024829   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:28.025061   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:28.025236   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:28.025373   82860 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:35:28.107380   82860 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:35:28.111642   82860 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:35:28.111663   82860 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:35:28.111733   82860 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:35:28.111835   82860 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:35:28.112090   82860 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:35:28.121966   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:35:28.144713   82860 start.go:296] duration metric: took 123.447237ms for postStartSetup
	I0816 18:35:28.144753   82860 fix.go:56] duration metric: took 17.909462025s for fixHost
	I0816 18:35:28.144774   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:28.148124   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.148581   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:28.148611   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.148817   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:28.149061   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:28.149226   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:28.149462   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:28.149663   82860 main.go:141] libmachine: Using SSH client type: native
	I0816 18:35:28.149875   82860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:35:28.149889   82860 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:35:28.257265   82860 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723833328.231000393
	
	I0816 18:35:28.257285   82860 fix.go:216] guest clock: 1723833328.231000393
	I0816 18:35:28.257317   82860 fix.go:229] Guest: 2024-08-16 18:35:28.231000393 +0000 UTC Remote: 2024-08-16 18:35:28.1447566 +0000 UTC m=+18.067382841 (delta=86.243793ms)
	I0816 18:35:28.257344   82860 fix.go:200] guest clock delta is within tolerance: 86.243793ms
	I0816 18:35:28.257363   82860 start.go:83] releasing machines lock for "newest-cni-774287", held for 18.022088542s
	I0816 18:35:28.257387   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:28.257663   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:35:28.260202   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.260697   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:28.260723   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.260967   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:28.261567   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:28.261724   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:28.261813   82860 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:35:28.261854   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:28.261953   82860 ssh_runner.go:195] Run: cat /version.json
	I0816 18:35:28.261975   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:28.264658   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.264939   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.264973   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:28.264990   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.265183   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:28.265315   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:28.265337   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:28.265356   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:28.265532   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:28.265596   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:28.265686   82860 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:35:28.265752   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:28.265890   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:28.266091   82860 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:35:28.403075   82860 ssh_runner.go:195] Run: systemctl --version
	I0816 18:35:28.409004   82860 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:35:28.552910   82860 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:35:28.559546   82860 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:35:28.559616   82860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:35:28.574805   82860 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:35:28.574830   82860 start.go:495] detecting cgroup driver to use...
	I0816 18:35:28.574894   82860 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:35:28.592310   82860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:35:28.605734   82860 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:35:28.605784   82860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:35:28.618938   82860 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:35:28.632511   82860 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:35:28.749564   82860 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:35:28.884025   82860 docker.go:233] disabling docker service ...
	I0816 18:35:28.884111   82860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:35:28.897946   82860 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:35:28.912408   82860 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:35:29.045760   82860 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:35:29.168597   82860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:35:29.182977   82860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:35:29.200558   82860 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:35:29.200632   82860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:35:29.210715   82860 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:35:29.210778   82860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:35:29.220872   82860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:35:29.230576   82860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:35:29.240387   82860 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:35:29.250576   82860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:35:29.260717   82860 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:35:29.277361   82860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:35:29.286988   82860 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:35:29.296729   82860 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:35:29.296810   82860 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:35:29.308751   82860 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:35:29.317992   82860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:35:29.433262   82860 ssh_runner.go:195] Run: sudo systemctl restart crio
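The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs manager, conmon cgroup, the unprivileged-port sysctl), probes net.bridge.bridge-nf-call-iptables, falls back to modprobe br_netfilter when the sysctl path is missing, enables IPv4 forwarding, and restarts CRI-O. A minimal, self-contained Go sketch of just the netfilter/forwarding step is shown below; it uses only the standard library, is not minikube's internal helper, and needs root to actually apply anything.

// netfilter_check.go: standalone sketch of the bridge-netfilter
// check-and-fallback logged above (sysctl probe, then modprobe
// br_netfilter, then enabling ip_forward). Illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// The sysctl is absent until br_netfilter is loaded, which is
		// exactly the status-255 case in the log above.
		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe failed: %v\n%s", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintf(os.Stderr, "enabling ip_forward: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}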
	I0816 18:35:29.563451   82860 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:35:29.563561   82860 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:35:29.568357   82860 start.go:563] Will wait 60s for crictl version
	I0816 18:35:29.568415   82860 ssh_runner.go:195] Run: which crictl
	I0816 18:35:29.571788   82860 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:35:29.610151   82860 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:35:29.610237   82860 ssh_runner.go:195] Run: crio --version
	I0816 18:35:29.636373   82860 ssh_runner.go:195] Run: crio --version
	I0816 18:35:29.664441   82860 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:35:29.665865   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:35:29.668460   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:29.668844   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:29.668869   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:29.669153   82860 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:35:29.672812   82860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
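The /etc/hosts update above is the usual "filter out the old entry, append the new one" rewrite done through a temp file. A rough Go sketch of the same idea follows (standard library only; the host.minikube.internal name and 192.168.39.1 address are taken from the log, and root is required to write /etc/hosts).

// hosts_entry.go: sketch of the /etc/hosts rewrite logged above: drop any
// stale host.minikube.internal line, then append the current mapping.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // equivalent of the grep -v in the logged command
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}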
	I0816 18:35:29.685697   82860 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0816 18:35:29.686956   82860 kubeadm.go:883] updating cluster {Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:35:29.687074   82860 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:35:29.687155   82860 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:35:29.723441   82860 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:35:29.723513   82860 ssh_runner.go:195] Run: which lz4
	I0816 18:35:29.727176   82860 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:35:29.731096   82860 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:35:29.731125   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:35:30.886371   82860 crio.go:462] duration metric: took 1.159237629s to copy over tarball
	I0816 18:35:30.886474   82860 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:35:32.950499   82860 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.063986842s)
	I0816 18:35:32.950532   82860 crio.go:469] duration metric: took 2.064126834s to extract the tarball
	I0816 18:35:32.950544   82860 ssh_runner.go:146] rm: /preloaded.tar.lz4
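The preload path above finds no /preloaded.tar.lz4 on the VM, copies the ~389 MB cached tarball over SSH, extracts it into /var with tar's lz4 filter, and then removes it. A short Go sketch of the extract step via os/exec follows; it assumes tar and lz4 are on the PATH and simply shells out the same command line, rather than reusing any minikube helper.

// extract_preload.go: sketch of unpacking a preloaded image tarball the way
// the log above does (tar with an lz4 decompression filter into /var).
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("extracted preloaded tarball in %s", time.Since(start))
}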
	I0816 18:35:32.990500   82860 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:35:33.030583   82860 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:35:33.030603   82860 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:35:33.030611   82860 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.31.0 crio true true} ...
	I0816 18:35:33.030736   82860 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-774287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:35:33.030812   82860 ssh_runner.go:195] Run: crio config
	I0816 18:35:33.078078   82860 cni.go:84] Creating CNI manager for ""
	I0816 18:35:33.078100   82860 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:35:33.078112   82860 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0816 18:35:33.078135   82860 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-774287 NodeName:newest-cni-774287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:35:33.078272   82860 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-774287"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:35:33.078359   82860 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:35:33.088040   82860 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:35:33.088094   82860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:35:33.097961   82860 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0816 18:35:33.114328   82860 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:35:33.129777   82860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
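The file scp'd to /var/tmp/minikube/kubeadm.yaml.new above is a four-document YAML stream: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration (kubelet.config.k8s.io/v1beta1), and a KubeProxyConfiguration (kubeproxy.config.k8s.io/v1alpha1). Below is a small sketch that decodes such a stream and prints each document's kind; it assumes the third-party gopkg.in/yaml.v3 package and is only a sanity-check illustration, not part of minikube.

// split_kubeadm_yaml.go: decode a multi-document kubeadm config stream and
// print each document's apiVersion/kind. The file path matches the log above;
// the check itself is illustrative.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the YAML stream
			}
			log.Fatalf("document %d: %v", i, err)
		}
		fmt.Printf("document %d: %s %s\n", i, doc.APIVersion, doc.Kind)
	}
}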
	I0816 18:35:33.145582   82860 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0816 18:35:33.148959   82860 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:35:33.160465   82860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:35:33.280999   82860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:35:33.296739   82860 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287 for IP: 192.168.39.194
	I0816 18:35:33.296765   82860 certs.go:194] generating shared ca certs ...
	I0816 18:35:33.296787   82860 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:35:33.296941   82860 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:35:33.296984   82860 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:35:33.296994   82860 certs.go:256] generating profile certs ...
	I0816 18:35:33.297066   82860 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/client.key
	I0816 18:35:33.297140   82860 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key.fca7bee4
	I0816 18:35:33.297190   82860 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.key
	I0816 18:35:33.297334   82860 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:35:33.297365   82860 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:35:33.297375   82860 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:35:33.297400   82860 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:35:33.297424   82860 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:35:33.297446   82860 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:35:33.297484   82860 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:35:33.298253   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:35:33.336966   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:35:33.368278   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:35:33.413421   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:35:33.437025   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:35:33.460198   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:35:33.486027   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:35:33.507167   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:35:33.528180   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:35:33.548703   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:35:33.569269   82860 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:35:33.590075   82860 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:35:33.604812   82860 ssh_runner.go:195] Run: openssl version
	I0816 18:35:33.610125   82860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:35:33.620172   82860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:35:33.624070   82860 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:35:33.624139   82860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:35:33.629684   82860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:35:33.639598   82860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:35:33.649244   82860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:35:33.653371   82860 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:35:33.653417   82860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:35:33.658894   82860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:35:33.668227   82860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:35:33.678085   82860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:35:33.682420   82860 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:35:33.682457   82860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:35:33.687585   82860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:35:33.697113   82860 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:35:33.701068   82860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:35:33.706473   82860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:35:33.711712   82860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:35:33.716938   82860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:35:33.722035   82860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:35:33.727216   82860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
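The six openssl invocations above are expiry checks: -checkend 86400 fails if the certificate expires within the next 24 hours. The same check can be expressed with Go's standard library, as in the sketch below (the apiserver.crt path is a placeholder, not one of the exact files from the log).

// checkend.go: stdlib equivalent of `openssl x509 -checkend 86400` as run on
// the control-plane certs above: exit non-zero if the cert expires within 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}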
	I0816 18:35:33.732377   82860 kubeadm.go:392] StartCluster: {Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:35:33.732484   82860 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:35:33.732520   82860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:35:33.765222   82860 cri.go:89] found id: ""
	I0816 18:35:33.765289   82860 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:35:33.774606   82860 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:35:33.774625   82860 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:35:33.774664   82860 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:35:33.783299   82860 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:35:33.783861   82860 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-774287" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:35:33.784150   82860 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-774287" cluster setting kubeconfig missing "newest-cni-774287" context setting]
	I0816 18:35:33.784646   82860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:35:33.785890   82860 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:35:33.794867   82860 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.194
	I0816 18:35:33.794895   82860 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:35:33.794906   82860 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:35:33.794949   82860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:35:33.830533   82860 cri.go:89] found id: ""
	I0816 18:35:33.830598   82860 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:35:33.845838   82860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:35:33.855718   82860 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:35:33.855740   82860 kubeadm.go:157] found existing configuration files:
	
	I0816 18:35:33.855798   82860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:35:33.864213   82860 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:35:33.864269   82860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:35:33.872768   82860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:35:33.880604   82860 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:35:33.880675   82860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:35:33.889082   82860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:35:33.896990   82860 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:35:33.897040   82860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:35:33.905402   82860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:35:33.913172   82860 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:35:33.913209   82860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:35:33.922356   82860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:35:33.930975   82860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:35:34.031293   82860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:35:34.904774   82860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:35:35.113982   82860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:35:35.178974   82860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:35:35.257336   82860 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:35:35.257416   82860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:35:35.757925   82860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:35:36.258201   82860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:35:36.757505   82860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:35:37.257620   82860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:35:37.283299   82860 api_server.go:72] duration metric: took 2.025969216s to wait for apiserver process to appear ...
	I0816 18:35:37.283331   82860 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:35:37.283355   82860 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0816 18:35:37.283818   82860 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I0816 18:35:37.783402   82860 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0816 18:35:39.881725   82860 api_server.go:279] https://192.168.39.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:35:39.881759   82860 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:35:39.881776   82860 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0816 18:35:39.951457   82860 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:35:39.951488   82860 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:35:40.283870   82860 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0816 18:35:40.289001   82860 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:35:40.289038   82860 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:35:40.784265   82860 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0816 18:35:40.790178   82860 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:35:40.790206   82860 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:35:41.283704   82860 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0816 18:35:41.288774   82860 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0816 18:35:41.295999   82860 api_server.go:141] control plane version: v1.31.0
	I0816 18:35:41.296024   82860 api_server.go:131] duration metric: took 4.01268455s to wait for apiserver health ...
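The restart is considered healthy only once GET https://192.168.39.194:8443/healthz returns 200: the earlier 403 (anonymous user before RBAC bootstrap completes) and 500 (poststarthooks still failing) responses are retried roughly every 500ms. A rough Go sketch of such a polling loop follows; TLS verification is skipped purely to keep the sketch self-contained, whereas the real client authenticates with cluster credentials.

// wait_healthz.go: retry GET <url> every 500ms until it returns 200 ("ok")
// or a deadline passes, treating 403 and 500 as "not ready yet" exactly as
// the log above does. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut; a real client would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.194:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}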
	I0816 18:35:41.296035   82860 cni.go:84] Creating CNI manager for ""
	I0816 18:35:41.296043   82860 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:35:41.297757   82860 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:35:41.298937   82860 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:35:41.314679   82860 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:35:41.344687   82860 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:35:41.363789   82860 system_pods.go:59] 8 kube-system pods found
	I0816 18:35:41.363832   82860 system_pods.go:61] "coredns-6f6b679f8f-mkf4v" [2a1e64f6-8e75-44eb-a56e-2b906451e5d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:35:41.363846   82860 system_pods.go:61] "etcd-newest-cni-774287" [56f584bc-707b-45f0-bd0b-88a6e501e983] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:35:41.363857   82860 system_pods.go:61] "kube-apiserver-newest-cni-774287" [13190a30-4671-4519-bb50-d30afe24c4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:35:41.363870   82860 system_pods.go:61] "kube-controller-manager-newest-cni-774287" [5417f08b-55b4-4aef-8f2b-c742344794de] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:35:41.363882   82860 system_pods.go:61] "kube-proxy-5ng9m" [2b4ced34-b436-4834-991b-776537026b48] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 18:35:41.363893   82860 system_pods.go:61] "kube-scheduler-newest-cni-774287" [daac0293-c0d0-4e25-9ae1-9c27698d481b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:35:41.363904   82860 system_pods.go:61] "metrics-server-6867b74b74-4z6xq" [2358fe5c-7d10-4583-bc70-4a20440fc190] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:35:41.363913   82860 system_pods.go:61] "storage-provisioner" [34347a9d-1d36-48e0-a138-40916f0ad78f] Running
	I0816 18:35:41.363925   82860 system_pods.go:74] duration metric: took 19.217548ms to wait for pod list to return data ...
	I0816 18:35:41.363936   82860 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:35:41.371652   82860 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:35:41.371677   82860 node_conditions.go:123] node cpu capacity is 2
	I0816 18:35:41.371687   82860 node_conditions.go:105] duration metric: took 7.743244ms to run NodePressure ...
	I0816 18:35:41.371704   82860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:35:41.636318   82860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:35:41.647386   82860 ops.go:34] apiserver oom_adj: -16
	I0816 18:35:41.647419   82860 kubeadm.go:597] duration metric: took 7.872787289s to restartPrimaryControlPlane
	I0816 18:35:41.647432   82860 kubeadm.go:394] duration metric: took 7.915060356s to StartCluster
	I0816 18:35:41.647454   82860 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:35:41.647540   82860 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:35:41.648491   82860 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:35:41.648763   82860 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:35:41.648874   82860 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:35:41.648974   82860 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-774287"
	I0816 18:35:41.649002   82860 config.go:182] Loaded profile config "newest-cni-774287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:35:41.649004   82860 addons.go:69] Setting metrics-server=true in profile "newest-cni-774287"
	I0816 18:35:41.649008   82860 addons.go:69] Setting dashboard=true in profile "newest-cni-774287"
	I0816 18:35:41.649034   82860 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-774287"
	W0816 18:35:41.649045   82860 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:35:41.649049   82860 addons.go:234] Setting addon metrics-server=true in "newest-cni-774287"
	W0816 18:35:41.649059   82860 addons.go:243] addon metrics-server should already be in state true
	I0816 18:35:41.649068   82860 addons.go:234] Setting addon dashboard=true in "newest-cni-774287"
	I0816 18:35:41.649075   82860 host.go:66] Checking if "newest-cni-774287" exists ...
	W0816 18:35:41.649082   82860 addons.go:243] addon dashboard should already be in state true
	I0816 18:35:41.649091   82860 host.go:66] Checking if "newest-cni-774287" exists ...
	I0816 18:35:41.649116   82860 host.go:66] Checking if "newest-cni-774287" exists ...
	I0816 18:35:41.648987   82860 addons.go:69] Setting default-storageclass=true in profile "newest-cni-774287"
	I0816 18:35:41.649168   82860 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-774287"
	I0816 18:35:41.649416   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.649440   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.649467   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.649467   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.649498   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.649527   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.649471   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.649606   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.650440   82860 out.go:177] * Verifying Kubernetes components...
	I0816 18:35:41.651817   82860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:35:41.664954   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40569
	I0816 18:35:41.665126   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0816 18:35:41.665454   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.665460   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.665736   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0816 18:35:41.665982   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.666008   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.666224   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.666239   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.666276   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.666393   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.666594   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.666722   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.666742   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.666834   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:35:41.666939   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.666978   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.667053   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.667547   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.667574   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.667670   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0816 18:35:41.668102   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.668643   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.668665   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.669026   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.669587   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.669616   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.669891   82860 addons.go:234] Setting addon default-storageclass=true in "newest-cni-774287"
	W0816 18:35:41.669908   82860 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:35:41.669937   82860 host.go:66] Checking if "newest-cni-774287" exists ...
	I0816 18:35:41.670253   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.670278   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.682518   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I0816 18:35:41.682930   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.683493   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.683519   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.683865   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.684049   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:35:41.684458   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0816 18:35:41.684837   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.685479   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.685496   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.685791   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:41.686012   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.686621   82860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:35:41.686651   82860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:35:41.686889   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0816 18:35:41.687220   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.687758   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.687779   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.687853   82860 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:35:41.688200   82860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0816 18:35:41.688296   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.688428   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:35:41.688786   82860 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:35:41.689227   82860 main.go:141] libmachine: Using API Version  1
	I0816 18:35:41.689241   82860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:35:41.689317   82860 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:35:41.689334   82860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:35:41.689354   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:35:41.689761   82860 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:35:41.689946   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:35:41.691502   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:41.692255   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:41.692495   82860 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:35:21 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:35:41.692521   82860 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:35:41.692702   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:35:41.692914   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:35:41.692972   82860 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:35:41.693357   82860 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0816 18:35:41.693614   82860 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:35:41.693782   82860 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:35:41.694667   82860 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:35:41.695553   82860 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.679865863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833342679833487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d49d3e7c-bcb2-499b-a301-d63f4cd8f38d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.680675377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=801a8fac-d24d-4f21-9cab-5e6b16b09e2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.680748867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=801a8fac-d24d-4f21-9cab-5e6b16b09e2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.681083326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=801a8fac-d24d-4f21-9cab-5e6b16b09e2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.723378819Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef251536-aca4-4715-aa26-88250eb7d183 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.723967471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef251536-aca4-4715-aa26-88250eb7d183 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.725177495Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bae6c97-12d8-411f-b6cc-d05e21e709d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.726102649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833342726071574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bae6c97-12d8-411f-b6cc-d05e21e709d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.726826150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a082d15-6dae-4f23-880a-6ae4b35fe8ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.726971329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a082d15-6dae-4f23-880a-6ae4b35fe8ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.727259072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a082d15-6dae-4f23-880a-6ae4b35fe8ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.777568508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c898b63-bf3b-4dfc-85de-7167dc03ed04 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.777656094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c898b63-bf3b-4dfc-85de-7167dc03ed04 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.779504404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b09b330-2d64-436a-b1a1-ff5c4ed95d97 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.780081878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833342780042321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b09b330-2d64-436a-b1a1-ff5c4ed95d97 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.781632046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f0641c6-ab79-4637-a4f8-d93318754158 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.781719952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f0641c6-ab79-4637-a4f8-d93318754158 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.782056239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f0641c6-ab79-4637-a4f8-d93318754158 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.815851896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22a4cc04-b26e-4443-857a-d2751ee3dbaf name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.815996736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22a4cc04-b26e-4443-857a-d2751ee3dbaf name=/runtime.v1.RuntimeService/Version
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.817231243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=781a49ac-1a1e-4b8e-8598-2801a6d30d2b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.817654929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833342817632829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=781a49ac-1a1e-4b8e-8598-2801a6d30d2b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.818326700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=972c4bd0-9068-4b21-90a4-43d05e66e150 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.818400091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=972c4bd0-9068-4b21-90a4-43d05e66e150 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:35:42 default-k8s-diff-port-256678 crio[724]: time="2024-08-16 18:35:42.818645514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0,PodSandboxId:befe37e8816455cec081a73ae9c7e33e3d73e53db4af187a50be5ac80e5e833d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365774370023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t74vf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41afd723-b034-460e-8e5f-197c8d8bcd7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2,PodSandboxId:04fd86f5a1cdd6a05692a954edd76e4f4e1e3bee1b25577f90db51fc121a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832365521802309,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hx7sb,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c,PodSandboxId:3f5f107543d865946007d7e55aaa014a788bb20f53ddfc0b695f1ebfd4f7ac1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1723832365304056913,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 491e3d8e-5a8b-4187-a682-411c6fb9dd92,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a,PodSandboxId:a9a4d48c479ae912b315f439a6a7dc6584c867463dd6b44c9bbe103d6d9dab33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723832364285774094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsskg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c863ca3c-8451-4fa7-b22d-c709e67bd26b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9,PodSandboxId:4b2d9901dd4e3547e9e046b4946c7f2c329387fe0cac35378e8a1e704904bafe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832353352324985,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495866009f6d3fb6a7d309d47e72d3ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d,PodSandboxId:6c4dbc8c596f1d872ab17a7d965a41e7b3f91972a9dfa1690a3d39018c3c9657,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832353316994754,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec8451e8c42a3a85d47f8c1e58894360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252,PodSandboxId:0d896848c3a212ee599c6e2393bec96f3c9acb22f16ba783572287d9335a59ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832353304207151,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d383f69673220a201da4925ed691535,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a,PodSandboxId:192b1742f764a2d69f974f54f922665a9bd21f8e11c4b3c15230d54af4956b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832353298245359,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed,PodSandboxId:8e408789e7ce797b7059b60b63d172305a262759954de690dbd784330d52e507,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723832064378467122,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-256678,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faa47c22ff820064872f0dddee3e5397,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=972c4bd0-9068-4b21-90a4-43d05e66e150 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44ffea7ac7a4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   befe37e881645       coredns-6f6b679f8f-t74vf
	8150e1ec7b21f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   04fd86f5a1cdd       coredns-6f6b679f8f-hx7sb
	6e86814589080       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   3f5f107543d86       storage-provisioner
	172b97dc3d12c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   a9a4d48c479ae       kube-proxy-qsskg
	f18a0112d14e1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   4b2d9901dd4e3       etcd-default-k8s-diff-port-256678
	b09f30797f03b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   6c4dbc8c596f1       kube-scheduler-default-k8s-diff-port-256678
	e7c0d25b7b476       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   0d896848c3a21       kube-controller-manager-default-k8s-diff-port-256678
	4c862407ecc85       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   192b1742f764a       kube-apiserver-default-k8s-diff-port-256678
	6c2cdc235d0c8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   8e408789e7ce7       kube-apiserver-default-k8s-diff-port-256678
	
	
	==> coredns [44ffea7ac7a4fe457d2f8e864b98109d89702ab7760e8b91fa031af7842f3ee0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8150e1ec7b21f4494ae4b9f9dd2874f68eac8136968363add1726c684a6ecfa2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-256678
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-256678
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=default-k8s-diff-port-256678
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 18:19:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-256678
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:35:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:34:49 +0000   Fri, 16 Aug 2024 18:19:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:34:49 +0000   Fri, 16 Aug 2024 18:19:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:34:49 +0000   Fri, 16 Aug 2024 18:19:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:34:49 +0000   Fri, 16 Aug 2024 18:19:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.144
	  Hostname:    default-k8s-diff-port-256678
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ac74fca6129435b88a4d0646225ea02
	  System UUID:                3ac74fca-6129-435b-88a4-d0646225ea02
	  Boot ID:                    ee2a0432-1e4d-4a1e-a4f0-5190b5e93053
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hx7sb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-t74vf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-256678                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-256678             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-256678    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-qsskg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-256678             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-vmt5v                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-256678 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-256678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-256678 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-256678 event: Registered Node default-k8s-diff-port-256678 in Controller
	
	
	==> dmesg <==
	[  +0.037204] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug16 18:14] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.944422] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.562124] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.963553] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.077412] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059571] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.216898] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.127168] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.282903] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.421739] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.066664] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.676767] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.591055] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.635861] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.329118] kauditd_printk_skb: 31 callbacks suppressed
	[Aug16 18:19] kauditd_printk_skb: 6 callbacks suppressed
	[  +2.009646] systemd-fstab-generator[2591]: Ignoring "noauto" option for root device
	[  +4.679864] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.377937] systemd-fstab-generator[2914]: Ignoring "noauto" option for root device
	[  +5.809685] systemd-fstab-generator[3043]: Ignoring "noauto" option for root device
	[  +0.132143] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.543795] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f18a0112d14e18ef684dc2fe9092d50c1f5512d044a42c2a6517cb0a45ad8fd9] <==
	{"level":"info","ts":"2024-08-16T18:19:14.644203Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T18:19:14.644213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:19:14.645119Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:14.645782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.144:2379"}
	{"level":"info","ts":"2024-08-16T18:19:14.646694Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:19:14.646937Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:14.646962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T18:19:14.648442Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T18:29:14.682942Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":681}
	{"level":"info","ts":"2024-08-16T18:29:14.691755Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":681,"took":"8.18211ms","hash":877370136,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-16T18:29:14.691864Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":877370136,"revision":681,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T18:34:14.691407Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-08-16T18:34:14.703831Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":925,"took":"11.650787ms","hash":4017740076,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T18:34:14.703934Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4017740076,"revision":925,"compact-revision":681}
	{"level":"warn","ts":"2024-08-16T18:34:48.063774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.365775ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15328442632838047403 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.144\" mod_revision:1188 > success:<request_put:<key:\"/registry/masterleases/192.168.72.144\" value_size:67 lease:6105070595983271593 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.144\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T18:34:48.063952Z","caller":"traceutil/trace.go:171","msg":"trace[2004301692] linearizableReadLoop","detail":"{readStateIndex:1398; appliedIndex:1397; }","duration":"148.9006ms","start":"2024-08-16T18:34:47.915028Z","end":"2024-08-16T18:34:48.063929Z","steps":["trace[2004301692] 'read index received'  (duration: 11.330684ms)","trace[2004301692] 'applied index is now lower than readState.Index'  (duration: 137.568529ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-16T18:34:48.064026Z","caller":"traceutil/trace.go:171","msg":"trace[430754064] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"262.065404ms","start":"2024-08-16T18:34:47.801951Z","end":"2024-08-16T18:34:48.064016Z","steps":["trace[430754064] 'process raft request'  (duration: 124.449633ms)","trace[430754064] 'compare'  (duration: 136.237483ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T18:34:48.064280Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.242698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T18:34:48.064314Z","caller":"traceutil/trace.go:171","msg":"trace[1220735039] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1196; }","duration":"149.285185ms","start":"2024-08-16T18:34:47.915023Z","end":"2024-08-16T18:34:48.064308Z","steps":["trace[1220735039] 'agreement among raft nodes before linearized reading'  (duration: 149.153114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T18:34:48.064443Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.466777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T18:34:48.064474Z","caller":"traceutil/trace.go:171","msg":"trace[636984012] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1196; }","duration":"133.501674ms","start":"2024-08-16T18:34:47.930966Z","end":"2024-08-16T18:34:48.064468Z","steps":["trace[636984012] 'agreement among raft nodes before linearized reading'  (duration: 133.457482ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T18:34:48.609094Z","caller":"traceutil/trace.go:171","msg":"trace[738270467] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"149.988981ms","start":"2024-08-16T18:34:48.459084Z","end":"2024-08-16T18:34:48.609073Z","steps":["trace[738270467] 'process raft request'  (duration: 149.839483ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T18:35:36.456319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.755084ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-08-16T18:35:36.456426Z","caller":"traceutil/trace.go:171","msg":"trace[1992450905] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1235; }","duration":"100.87123ms","start":"2024-08-16T18:35:36.355536Z","end":"2024-08-16T18:35:36.456407Z","steps":["trace[1992450905] 'range keys from in-memory index tree'  (duration: 100.683345ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T18:35:36.720703Z","caller":"traceutil/trace.go:171","msg":"trace[1961473010] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"260.0639ms","start":"2024-08-16T18:35:36.460622Z","end":"2024-08-16T18:35:36.720686Z","steps":["trace[1961473010] 'process raft request'  (duration: 259.687832ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:35:43 up 21 min,  0 users,  load average: 0.07, 0.09, 0.12
	Linux default-k8s-diff-port-256678 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4c862407ecc8544e43f5fe5a09b60d7fc3df75cd26ec1342c489eda6f3bdd32a] <==
	I0816 18:32:17.040507       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:32:17.040581       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:34:16.039219       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:34:16.039453       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:34:17.041208       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:34:17.041346       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:34:17.041432       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:34:17.041466       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:34:17.042501       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:34:17.042581       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:35:17.042802       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:35:17.042935       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:35:17.043026       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:35:17.043040       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:35:17.044079       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:35:17.044132       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [6c2cdc235d0c89b96b197444acb5a9714191d8fe3722cbff5bcb5513a73de8ed] <==
	W0816 18:19:04.810128       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:04.911173       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:04.921759       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.029094       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.101982       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.128998       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.201164       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.215409       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:05.267239       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.154311       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.193562       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.352225       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.402268       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.464759       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.533596       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.556668       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.562204       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.565715       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.631149       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.674643       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.732607       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.768782       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.798812       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.837773       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 18:19:09.879757       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e7c0d25b7b476bcadc1d1d410ec17838a8b45cedb0bb40fc76adc6ce146ce252] <==
	I0816 18:30:23.533201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:30:39.332708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="234.494µs"
	I0816 18:30:50.335815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="165.082µs"
	E0816 18:30:52.996637       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:30:53.542489       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:31:23.002337       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:31:23.550015       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:31:53.008553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:31:53.557809       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:32:23.015213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:32:23.565618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:32:53.021529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:32:53.572682       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:33:23.028140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:33:23.580380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:33:53.035560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:33:53.588456       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:34:23.041951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:34:23.598968       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:34:49.135500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-256678"
	E0816 18:34:53.048755       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:34:53.609962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:35:23.054525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:35:23.617068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:35:41.331316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="478.915µs"
	
	
	==> kube-proxy [172b97dc3d12c4ee85db2aa377199c187b484e2cf6dd686fa942659c1c155a5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 18:19:24.764045       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 18:19:24.776363       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.144"]
	E0816 18:19:24.779765       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 18:19:24.919416       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 18:19:24.919473       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 18:19:24.919505       1 server_linux.go:169] "Using iptables Proxier"
	I0816 18:19:24.926552       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 18:19:24.926835       1 server.go:483] "Version info" version="v1.31.0"
	I0816 18:19:24.926858       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:19:24.928613       1 config.go:197] "Starting service config controller"
	I0816 18:19:24.928643       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 18:19:24.928660       1 config.go:104] "Starting endpoint slice config controller"
	I0816 18:19:24.928663       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 18:19:24.929095       1 config.go:326] "Starting node config controller"
	I0816 18:19:24.929125       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 18:19:25.031495       1 shared_informer.go:320] Caches are synced for node config
	I0816 18:19:25.031587       1 shared_informer.go:320] Caches are synced for service config
	I0816 18:19:25.032448       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b09f30797f03bc13b3cea0e942fad8ad2a711ee5bfc9ae535f3e636bf7801f4d] <==
	W0816 18:19:16.079057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:19:16.079095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:16.079140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:19:16.079178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:16.079146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 18:19:16.079241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:16.931449       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:19:16.931505       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 18:19:16.952914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 18:19:16.953314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.018831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 18:19:17.018917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.025436       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 18:19:17.025491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.081370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 18:19:17.081419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.100126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 18:19:17.100178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.238986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:19:17.239034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.259577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:17.259699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 18:19:17.266963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 18:19:17.267040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0816 18:19:19.570544       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 18:34:48 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:34:48.607771    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833288607426090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:48 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:34:48.607836    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833288607426090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:54 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:34:54.316784    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:34:58 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:34:58.610085    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833298609746414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:58 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:34:58.610379    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833298609746414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:06 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:06.316657    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:35:08 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:08.612154    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833308611486535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:08 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:08.612201    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833308611486535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:17 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:17.316782    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:35:18 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:18.337237    2921 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 18:35:18 default-k8s-diff-port-256678 kubelet[2921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 18:35:18 default-k8s-diff-port-256678 kubelet[2921]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 18:35:18 default-k8s-diff-port-256678 kubelet[2921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 18:35:18 default-k8s-diff-port-256678 kubelet[2921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 18:35:18 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:18.613594    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833318612820881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:18 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:18.613644    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833318612820881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:28 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:28.616356    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833328615266607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:28 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:28.616554    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833328615266607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:30 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:30.336214    2921 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 16 18:35:30 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:30.336289    2921 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 16 18:35:30 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:30.336518    2921 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swskk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-vmt5v_kube-system(8446e983-380f-42a8-ab5b-ce9b6d67ebad): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 16 18:35:30 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:30.338072    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	Aug 16 18:35:38 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:38.619226    2921 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833338618798377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:38 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:38.619696    2921 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833338618798377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:35:41 default-k8s-diff-port-256678 kubelet[2921]: E0816 18:35:41.316708    2921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-vmt5v" podUID="8446e983-380f-42a8-ab5b-ce9b6d67ebad"
	
	
	==> storage-provisioner [6e868145890802c78b2224210ec0fc9d6e76a46b800dfcf40d962dd8776c4d4c] <==
	I0816 18:19:25.473730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 18:19:25.496086       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 18:19:25.498032       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 18:19:25.527943       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 18:19:25.528208       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-256678_48c84abd-b3ad-478d-8e7c-ddb17557069c!
	I0816 18:19:25.531820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e42718b-b3c0-450b-9e14-b9e25bb5af15", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-256678_48c84abd-b3ad-478d-8e7c-ddb17557069c became leader
	I0816 18:19:25.631747       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-256678_48c84abd-b3ad-478d-8e7c-ddb17557069c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-vmt5v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 describe pod metrics-server-6867b74b74-vmt5v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-256678 describe pod metrics-server-6867b74b74-vmt5v: exit status 1 (69.275197ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-vmt5v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-256678 describe pod metrics-server-6867b74b74-vmt5v: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (423.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (367.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-777541 -n embed-certs-777541
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 18:34:45.863839933 +0000 UTC m=+6386.182384945
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-777541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-777541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.668µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-777541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
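start_stop_delete_test.go:297 compares the images referenced by the dashboard-metrics-scraper deployment against the override passed at enable time (--images=MetricsScraper=registry.k8s.io/echoserver:1.4 in the Audit table below). A hedged manual equivalent of that check, assuming the apiserver is reachable:

    kubectl --context embed-certs-777541 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'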
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-777541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-777541 logs -n 25: (3.394075548s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC | 16 Aug 24 18:34 UTC |
	| start   | -p newest-cni-774287 --memory=2200 --alsologtostderr   | newest-cni-774287            | jenkins | v1.33.1 | 16 Aug 24 18:34 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
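	# Reconstructed for readability from the wrapped Audit rows above (flags exactly as listed
	# there); for example, the embed-certs profile was exercised with commands of this shape:
	#   out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-777541 \
	#     --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	#   out/minikube-linux-amd64 stop -p embed-certs-777541 --alsologtostderr -v=3
	#   out/minikube-linux-amd64 addons enable dashboard -p embed-certs-777541 \
	#     --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	#   out/minikube-linux-amd64 start -p embed-certs-777541 --memory=2200 --alsologtostderr \
	#     --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0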
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:34:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:34:14.800399   81976 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:34:14.800917   81976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:34:14.800935   81976 out.go:358] Setting ErrFile to fd 2...
	I0816 18:34:14.800943   81976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:34:14.801359   81976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:34:14.802232   81976 out.go:352] Setting JSON to false
	I0816 18:34:14.803127   81976 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8153,"bootTime":1723825102,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:34:14.803184   81976 start.go:139] virtualization: kvm guest
	I0816 18:34:14.805422   81976 out.go:177] * [newest-cni-774287] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:34:14.806793   81976 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:34:14.806813   81976 notify.go:220] Checking for updates...
	I0816 18:34:14.809572   81976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:34:14.810834   81976 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:34:14.812152   81976 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:34:14.813328   81976 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:34:14.814617   81976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:34:14.816259   81976 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:14.816379   81976 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:14.816475   81976 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:14.816541   81976 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:34:14.851938   81976 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 18:34:14.853195   81976 start.go:297] selected driver: kvm2
	I0816 18:34:14.853217   81976 start.go:901] validating driver "kvm2" against <nil>
	I0816 18:34:14.853232   81976 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:34:14.853918   81976 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:34:14.853993   81976 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:34:14.868665   81976 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:34:14.868720   81976 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0816 18:34:14.868747   81976 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0816 18:34:14.868939   81976 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 18:34:14.868995   81976 cni.go:84] Creating CNI manager for ""
	I0816 18:34:14.869008   81976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:34:14.869019   81976 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 18:34:14.869098   81976 start.go:340] cluster config:
	{Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:34:14.869231   81976 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:34:14.870976   81976 out.go:177] * Starting "newest-cni-774287" primary control-plane node in "newest-cni-774287" cluster
	I0816 18:34:14.872003   81976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:34:14.872030   81976 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:34:14.872037   81976 cache.go:56] Caching tarball of preloaded images
	I0816 18:34:14.872113   81976 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:34:14.872126   81976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 18:34:14.872244   81976 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json ...
	I0816 18:34:14.872264   81976 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json: {Name:mk36d324910fe56cbc34dc45337a916147efc7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:34:14.872404   81976 start.go:360] acquireMachinesLock for newest-cni-774287: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:34:14.872431   81976 start.go:364] duration metric: took 14.058µs to acquireMachinesLock for "newest-cni-774287"
	I0816 18:34:14.872444   81976 start.go:93] Provisioning new machine with config: &{Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:34:14.872501   81976 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 18:34:14.873921   81976 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 18:34:14.874046   81976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:34:14.874086   81976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:34:14.890021   81976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I0816 18:34:14.890409   81976 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:34:14.890971   81976 main.go:141] libmachine: Using API Version  1
	I0816 18:34:14.890990   81976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:34:14.891321   81976 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:34:14.891533   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:14.891675   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:14.891816   81976 start.go:159] libmachine.API.Create for "newest-cni-774287" (driver="kvm2")
	I0816 18:34:14.891845   81976 client.go:168] LocalClient.Create starting
	I0816 18:34:14.891877   81976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem
	I0816 18:34:14.891922   81976 main.go:141] libmachine: Decoding PEM data...
	I0816 18:34:14.891941   81976 main.go:141] libmachine: Parsing certificate...
	I0816 18:34:14.892019   81976 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem
	I0816 18:34:14.892043   81976 main.go:141] libmachine: Decoding PEM data...
	I0816 18:34:14.892060   81976 main.go:141] libmachine: Parsing certificate...
	I0816 18:34:14.892084   81976 main.go:141] libmachine: Running pre-create checks...
	I0816 18:34:14.892095   81976 main.go:141] libmachine: (newest-cni-774287) Calling .PreCreateCheck
	I0816 18:34:14.892428   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetConfigRaw
	I0816 18:34:14.892796   81976 main.go:141] libmachine: Creating machine...
	I0816 18:34:14.892811   81976 main.go:141] libmachine: (newest-cni-774287) Calling .Create
	I0816 18:34:14.892961   81976 main.go:141] libmachine: (newest-cni-774287) Creating KVM machine...
	I0816 18:34:14.894317   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found existing default KVM network
	I0816 18:34:14.896019   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:14.895877   81999 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfa0}
	I0816 18:34:14.896040   81976 main.go:141] libmachine: (newest-cni-774287) DBG | created network xml: 
	I0816 18:34:14.896050   81976 main.go:141] libmachine: (newest-cni-774287) DBG | <network>
	I0816 18:34:14.896056   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   <name>mk-newest-cni-774287</name>
	I0816 18:34:14.896062   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   <dns enable='no'/>
	I0816 18:34:14.896066   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   
	I0816 18:34:14.896073   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0816 18:34:14.896084   81976 main.go:141] libmachine: (newest-cni-774287) DBG |     <dhcp>
	I0816 18:34:14.896093   81976 main.go:141] libmachine: (newest-cni-774287) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0816 18:34:14.896100   81976 main.go:141] libmachine: (newest-cni-774287) DBG |     </dhcp>
	I0816 18:34:14.896111   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   </ip>
	I0816 18:34:14.896117   81976 main.go:141] libmachine: (newest-cni-774287) DBG |   
	I0816 18:34:14.896125   81976 main.go:141] libmachine: (newest-cni-774287) DBG | </network>
	I0816 18:34:14.896136   81976 main.go:141] libmachine: (newest-cni-774287) DBG | 
	I0816 18:34:14.901237   81976 main.go:141] libmachine: (newest-cni-774287) DBG | trying to create private KVM network mk-newest-cni-774287 192.168.39.0/24...
	I0816 18:34:14.971625   81976 main.go:141] libmachine: (newest-cni-774287) DBG | private KVM network mk-newest-cni-774287 192.168.39.0/24 created
	I0816 18:34:14.971674   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:14.971583   81999 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:34:14.971688   81976 main.go:141] libmachine: (newest-cni-774287) Setting up store path in /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287 ...
	I0816 18:34:14.971710   81976 main.go:141] libmachine: (newest-cni-774287) Building disk image from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 18:34:14.971768   81976 main.go:141] libmachine: (newest-cni-774287) Downloading /home/jenkins/minikube-integration/19461-9545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 18:34:15.226744   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:15.226565   81999 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa...
	I0816 18:34:15.482647   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:15.482516   81999 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/newest-cni-774287.rawdisk...
	I0816 18:34:15.482677   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Writing magic tar header
	I0816 18:34:15.482691   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Writing SSH key tar header
	I0816 18:34:15.482699   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:15.482631   81999 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287 ...
	I0816 18:34:15.482727   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287
	I0816 18:34:15.482770   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube/machines
	I0816 18:34:15.482792   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:34:15.482807   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287 (perms=drwx------)
	I0816 18:34:15.482822   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19461-9545
	I0816 18:34:15.482833   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 18:34:15.482842   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home/jenkins
	I0816 18:34:15.482861   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube/machines (perms=drwxr-xr-x)
	I0816 18:34:15.482877   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545/.minikube (perms=drwxr-xr-x)
	I0816 18:34:15.482891   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration/19461-9545 (perms=drwxrwxr-x)
	I0816 18:34:15.482902   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Checking permissions on dir: /home
	I0816 18:34:15.482914   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Skipping /home - not owner
	I0816 18:34:15.482928   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 18:34:15.482940   81976 main.go:141] libmachine: (newest-cni-774287) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 18:34:15.482946   81976 main.go:141] libmachine: (newest-cni-774287) Creating domain...
	I0816 18:34:15.483916   81976 main.go:141] libmachine: (newest-cni-774287) define libvirt domain using xml: 
	I0816 18:34:15.483935   81976 main.go:141] libmachine: (newest-cni-774287) <domain type='kvm'>
	I0816 18:34:15.483945   81976 main.go:141] libmachine: (newest-cni-774287)   <name>newest-cni-774287</name>
	I0816 18:34:15.483953   81976 main.go:141] libmachine: (newest-cni-774287)   <memory unit='MiB'>2200</memory>
	I0816 18:34:15.483962   81976 main.go:141] libmachine: (newest-cni-774287)   <vcpu>2</vcpu>
	I0816 18:34:15.483969   81976 main.go:141] libmachine: (newest-cni-774287)   <features>
	I0816 18:34:15.483981   81976 main.go:141] libmachine: (newest-cni-774287)     <acpi/>
	I0816 18:34:15.483994   81976 main.go:141] libmachine: (newest-cni-774287)     <apic/>
	I0816 18:34:15.484004   81976 main.go:141] libmachine: (newest-cni-774287)     <pae/>
	I0816 18:34:15.484021   81976 main.go:141] libmachine: (newest-cni-774287)     
	I0816 18:34:15.484058   81976 main.go:141] libmachine: (newest-cni-774287)   </features>
	I0816 18:34:15.484090   81976 main.go:141] libmachine: (newest-cni-774287)   <cpu mode='host-passthrough'>
	I0816 18:34:15.484105   81976 main.go:141] libmachine: (newest-cni-774287)   
	I0816 18:34:15.484114   81976 main.go:141] libmachine: (newest-cni-774287)   </cpu>
	I0816 18:34:15.484124   81976 main.go:141] libmachine: (newest-cni-774287)   <os>
	I0816 18:34:15.484132   81976 main.go:141] libmachine: (newest-cni-774287)     <type>hvm</type>
	I0816 18:34:15.484141   81976 main.go:141] libmachine: (newest-cni-774287)     <boot dev='cdrom'/>
	I0816 18:34:15.484150   81976 main.go:141] libmachine: (newest-cni-774287)     <boot dev='hd'/>
	I0816 18:34:15.484159   81976 main.go:141] libmachine: (newest-cni-774287)     <bootmenu enable='no'/>
	I0816 18:34:15.484171   81976 main.go:141] libmachine: (newest-cni-774287)   </os>
	I0816 18:34:15.484191   81976 main.go:141] libmachine: (newest-cni-774287)   <devices>
	I0816 18:34:15.484210   81976 main.go:141] libmachine: (newest-cni-774287)     <disk type='file' device='cdrom'>
	I0816 18:34:15.484227   81976 main.go:141] libmachine: (newest-cni-774287)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/boot2docker.iso'/>
	I0816 18:34:15.484239   81976 main.go:141] libmachine: (newest-cni-774287)       <target dev='hdc' bus='scsi'/>
	I0816 18:34:15.484252   81976 main.go:141] libmachine: (newest-cni-774287)       <readonly/>
	I0816 18:34:15.484263   81976 main.go:141] libmachine: (newest-cni-774287)     </disk>
	I0816 18:34:15.484275   81976 main.go:141] libmachine: (newest-cni-774287)     <disk type='file' device='disk'>
	I0816 18:34:15.484292   81976 main.go:141] libmachine: (newest-cni-774287)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 18:34:15.484316   81976 main.go:141] libmachine: (newest-cni-774287)       <source file='/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/newest-cni-774287.rawdisk'/>
	I0816 18:34:15.484328   81976 main.go:141] libmachine: (newest-cni-774287)       <target dev='hda' bus='virtio'/>
	I0816 18:34:15.484344   81976 main.go:141] libmachine: (newest-cni-774287)     </disk>
	I0816 18:34:15.484360   81976 main.go:141] libmachine: (newest-cni-774287)     <interface type='network'>
	I0816 18:34:15.484372   81976 main.go:141] libmachine: (newest-cni-774287)       <source network='mk-newest-cni-774287'/>
	I0816 18:34:15.484391   81976 main.go:141] libmachine: (newest-cni-774287)       <model type='virtio'/>
	I0816 18:34:15.484404   81976 main.go:141] libmachine: (newest-cni-774287)     </interface>
	I0816 18:34:15.484411   81976 main.go:141] libmachine: (newest-cni-774287)     <interface type='network'>
	I0816 18:34:15.484419   81976 main.go:141] libmachine: (newest-cni-774287)       <source network='default'/>
	I0816 18:34:15.484431   81976 main.go:141] libmachine: (newest-cni-774287)       <model type='virtio'/>
	I0816 18:34:15.484440   81976 main.go:141] libmachine: (newest-cni-774287)     </interface>
	I0816 18:34:15.484448   81976 main.go:141] libmachine: (newest-cni-774287)     <serial type='pty'>
	I0816 18:34:15.484455   81976 main.go:141] libmachine: (newest-cni-774287)       <target port='0'/>
	I0816 18:34:15.484462   81976 main.go:141] libmachine: (newest-cni-774287)     </serial>
	I0816 18:34:15.484474   81976 main.go:141] libmachine: (newest-cni-774287)     <console type='pty'>
	I0816 18:34:15.484486   81976 main.go:141] libmachine: (newest-cni-774287)       <target type='serial' port='0'/>
	I0816 18:34:15.484504   81976 main.go:141] libmachine: (newest-cni-774287)     </console>
	I0816 18:34:15.484527   81976 main.go:141] libmachine: (newest-cni-774287)     <rng model='virtio'>
	I0816 18:34:15.484541   81976 main.go:141] libmachine: (newest-cni-774287)       <backend model='random'>/dev/random</backend>
	I0816 18:34:15.484549   81976 main.go:141] libmachine: (newest-cni-774287)     </rng>
	I0816 18:34:15.484556   81976 main.go:141] libmachine: (newest-cni-774287)     
	I0816 18:34:15.484565   81976 main.go:141] libmachine: (newest-cni-774287)     
	I0816 18:34:15.484581   81976 main.go:141] libmachine: (newest-cni-774287)   </devices>
	I0816 18:34:15.484591   81976 main.go:141] libmachine: (newest-cni-774287) </domain>
	I0816 18:34:15.484633   81976 main.go:141] libmachine: (newest-cni-774287) 
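	# A sketch of how the domain and network defined above could be inspected on the host with
	# standard libvirt tooling, run against the same qemu:///system URI libmachine uses:
	#   virsh -c qemu:///system dumpxml newest-cni-774287
	#   virsh -c qemu:///system net-dumpxml mk-newest-cni-774287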
	I0816 18:34:15.489321   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:96:f4:f7 in network default
	I0816 18:34:15.489918   81976 main.go:141] libmachine: (newest-cni-774287) Ensuring networks are active...
	I0816 18:34:15.489947   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:15.490591   81976 main.go:141] libmachine: (newest-cni-774287) Ensuring network default is active
	I0816 18:34:15.490867   81976 main.go:141] libmachine: (newest-cni-774287) Ensuring network mk-newest-cni-774287 is active
	I0816 18:34:15.491446   81976 main.go:141] libmachine: (newest-cni-774287) Getting domain xml...
	I0816 18:34:15.492270   81976 main.go:141] libmachine: (newest-cni-774287) Creating domain...
	I0816 18:34:16.745025   81976 main.go:141] libmachine: (newest-cni-774287) Waiting to get IP...
	I0816 18:34:16.745811   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:16.746247   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:16.746273   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:16.746226   81999 retry.go:31] will retry after 265.597921ms: waiting for machine to come up
	I0816 18:34:17.013618   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:17.014114   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:17.014146   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:17.014065   81999 retry.go:31] will retry after 374.317465ms: waiting for machine to come up
	I0816 18:34:17.389569   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:17.390116   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:17.390148   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:17.390067   81999 retry.go:31] will retry after 371.344854ms: waiting for machine to come up
	I0816 18:34:17.762470   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:17.762866   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:17.762897   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:17.762818   81999 retry.go:31] will retry after 424.91842ms: waiting for machine to come up
	I0816 18:34:18.189428   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:18.189942   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:18.189967   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:18.189901   81999 retry.go:31] will retry after 487.835028ms: waiting for machine to come up
	I0816 18:34:18.679759   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:18.680200   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:18.680225   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:18.680155   81999 retry.go:31] will retry after 850.214847ms: waiting for machine to come up
	I0816 18:34:19.532156   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:19.532604   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:19.532655   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:19.532552   81999 retry.go:31] will retry after 792.840893ms: waiting for machine to come up
	I0816 18:34:20.326950   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:20.327482   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:20.327512   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:20.327427   81999 retry.go:31] will retry after 1.013314353s: waiting for machine to come up
	I0816 18:34:21.342627   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:21.343114   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:21.343142   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:21.342998   81999 retry.go:31] will retry after 1.257401636s: waiting for machine to come up
	I0816 18:34:22.601621   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:22.602248   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:22.602271   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:22.602191   81999 retry.go:31] will retry after 1.727032619s: waiting for machine to come up
	I0816 18:34:24.330884   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:24.331372   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:24.331398   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:24.331343   81999 retry.go:31] will retry after 2.002119281s: waiting for machine to come up
	I0816 18:34:26.334731   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:26.335301   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:26.335321   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:26.335251   81999 retry.go:31] will retry after 3.422510613s: waiting for machine to come up
	I0816 18:34:29.761853   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:29.762217   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:29.762242   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:29.762173   81999 retry.go:31] will retry after 4.140861901s: waiting for machine to come up
	I0816 18:34:33.905830   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:33.906250   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find current IP address of domain newest-cni-774287 in network mk-newest-cni-774287
	I0816 18:34:33.906283   81976 main.go:141] libmachine: (newest-cni-774287) DBG | I0816 18:34:33.906189   81999 retry.go:31] will retry after 4.137136905s: waiting for machine to come up
	I0816 18:34:38.046346   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.047072   81976 main.go:141] libmachine: (newest-cni-774287) Found IP for machine: 192.168.39.194
	I0816 18:34:38.047107   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has current primary IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.047119   81976 main.go:141] libmachine: (newest-cni-774287) Reserving static IP address...
	I0816 18:34:38.048060   81976 main.go:141] libmachine: (newest-cni-774287) DBG | unable to find host DHCP lease matching {name: "newest-cni-774287", mac: "52:54:00:2d:15:e2", ip: "192.168.39.194"} in network mk-newest-cni-774287
	I0816 18:34:38.126390   81976 main.go:141] libmachine: (newest-cni-774287) Reserved static IP address: 192.168.39.194
	I0816 18:34:38.126431   81976 main.go:141] libmachine: (newest-cni-774287) Waiting for SSH to be available...
	I0816 18:34:38.126443   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Getting to WaitForSSH function...
	I0816 18:34:38.129678   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.130117   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.130148   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.130291   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Using SSH client type: external
	I0816 18:34:38.130333   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa (-rw-------)
	I0816 18:34:38.130393   81976 main.go:141] libmachine: (newest-cni-774287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:34:38.130413   81976 main.go:141] libmachine: (newest-cni-774287) DBG | About to run SSH command:
	I0816 18:34:38.130431   81976 main.go:141] libmachine: (newest-cni-774287) DBG | exit 0
	I0816 18:34:38.261282   81976 main.go:141] libmachine: (newest-cni-774287) DBG | SSH cmd err, output: <nil>: 
	I0816 18:34:38.261587   81976 main.go:141] libmachine: (newest-cni-774287) KVM machine creation complete!
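	# The IP/SSH wait above can be re-checked by hand; the lease name and ssh arguments below are
	# taken verbatim from the preceding log lines:
	#   virsh -c qemu:///system net-dhcp-leases mk-newest-cni-774287
	#   ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	#     -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa \
	#     docker@192.168.39.194 'exit 0'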
	I0816 18:34:38.261960   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetConfigRaw
	I0816 18:34:38.262482   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:38.262687   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:38.262887   81976 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 18:34:38.262907   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetState
	I0816 18:34:38.264106   81976 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 18:34:38.264120   81976 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 18:34:38.264128   81976 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 18:34:38.264156   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.266644   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.266973   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.267010   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.267183   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.267359   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.267527   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.267642   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.267806   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.268044   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.268059   81976 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 18:34:38.384019   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:34:38.384044   81976 main.go:141] libmachine: Detecting the provisioner...
	I0816 18:34:38.384053   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.387470   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.387991   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.388027   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.388192   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.388363   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.388500   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.388678   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.388850   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.389024   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.389035   81976 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 18:34:38.505538   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 18:34:38.505659   81976 main.go:141] libmachine: found compatible host: buildroot
	I0816 18:34:38.505681   81976 main.go:141] libmachine: Provisioning with buildroot...
	I0816 18:34:38.505694   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:38.505963   81976 buildroot.go:166] provisioning hostname "newest-cni-774287"
	I0816 18:34:38.505985   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:38.506208   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.508968   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.509327   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.509346   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.509558   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.509748   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.509912   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.510044   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.510192   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.510394   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.510408   81976 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-774287 && echo "newest-cni-774287" | sudo tee /etc/hostname
	I0816 18:34:38.639166   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-774287
	
	I0816 18:34:38.639190   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.642270   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.642699   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.642720   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.642975   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:38.643182   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.643333   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:38.643496   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:38.643695   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:38.643909   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:38.643927   81976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-774287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-774287/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-774287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:34:38.764958   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
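The two SSH commands above set the guest hostname and keep /etc/hosts consistent with it. A minimal standalone sketch of the same logic, with NAME as a placeholder for the machine name:

    NAME=newest-cni-774287
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # add or rewrite the 127.0.1.1 entry so the new hostname resolves locally
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi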
	I0816 18:34:38.764995   81976 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:34:38.765019   81976 buildroot.go:174] setting up certificates
	I0816 18:34:38.765033   81976 provision.go:84] configureAuth start
	I0816 18:34:38.765049   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetMachineName
	I0816 18:34:38.765348   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:38.768384   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.768734   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.768762   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.769013   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:38.771573   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.771990   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:38.772021   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:38.772208   81976 provision.go:143] copyHostCerts
	I0816 18:34:38.772295   81976 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:34:38.772319   81976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:34:38.772403   81976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:34:38.772557   81976 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:34:38.772569   81976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:34:38.772604   81976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:34:38.772725   81976 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:34:38.772735   81976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:34:38.772764   81976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:34:38.772841   81976 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.newest-cni-774287 san=[127.0.0.1 192.168.39.194 localhost minikube newest-cni-774287]
	I0816 18:34:39.063813   81976 provision.go:177] copyRemoteCerts
	I0816 18:34:39.063874   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:34:39.063898   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.066633   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.067097   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.067132   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.067288   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.067470   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.067625   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.067788   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.154901   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:34:39.178395   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:34:39.200786   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:34:39.223737   81976 provision.go:87] duration metric: took 458.687372ms to configureAuth
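configureAuth generated a server certificate whose SANs cover 127.0.0.1, 192.168.39.194, localhost, minikube and the machine name, then copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A hedged spot-check from the host, assuming openssl is available in the guest image:

    out/minikube-linux-amd64 -p newest-cni-774287 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName"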
	I0816 18:34:39.223765   81976 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:34:39.223959   81976 config.go:182] Loaded profile config "newest-cni-774287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:34:39.224043   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.226949   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.227378   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.227413   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.227579   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.227784   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.227958   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.228132   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.228327   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:39.228534   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:39.228569   81976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:34:39.514003   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
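The sysconfig step above is a single SSH one-liner; written out as separate commands with the same content:

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio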
	I0816 18:34:39.514040   81976 main.go:141] libmachine: Checking connection to Docker...
	I0816 18:34:39.514052   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetURL
	I0816 18:34:39.515607   81976 main.go:141] libmachine: (newest-cni-774287) DBG | Using libvirt version 6000000
	I0816 18:34:39.518012   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.518416   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.518437   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.518699   81976 main.go:141] libmachine: Docker is up and running!
	I0816 18:34:39.518717   81976 main.go:141] libmachine: Reticulating splines...
	I0816 18:34:39.518725   81976 client.go:171] duration metric: took 24.626872631s to LocalClient.Create
	I0816 18:34:39.518751   81976 start.go:167] duration metric: took 24.626937052s to libmachine.API.Create "newest-cni-774287"
	I0816 18:34:39.518760   81976 start.go:293] postStartSetup for "newest-cni-774287" (driver="kvm2")
	I0816 18:34:39.518772   81976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:34:39.518792   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.519063   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:34:39.519090   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.521609   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.521925   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.521958   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.522039   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.522214   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.522374   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.522489   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.610884   81976 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:34:39.614996   81976 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:34:39.615023   81976 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:34:39.615082   81976 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:34:39.615151   81976 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:34:39.615260   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:34:39.624399   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:34:39.648986   81976 start.go:296] duration metric: took 130.2114ms for postStartSetup
	I0816 18:34:39.649037   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetConfigRaw
	I0816 18:34:39.649632   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:39.652258   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.652593   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.652643   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.652925   81976 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/newest-cni-774287/config.json ...
	I0816 18:34:39.653140   81976 start.go:128] duration metric: took 24.780630072s to createHost
	I0816 18:34:39.653166   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.655622   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.655955   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.656010   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.656103   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.656356   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.656537   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.656710   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.656859   81976 main.go:141] libmachine: Using SSH client type: native
	I0816 18:34:39.657018   81976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0816 18:34:39.657037   81976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:34:39.777271   81976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723833279.749567986
	
	I0816 18:34:39.777292   81976 fix.go:216] guest clock: 1723833279.749567986
	I0816 18:34:39.777298   81976 fix.go:229] Guest: 2024-08-16 18:34:39.749567986 +0000 UTC Remote: 2024-08-16 18:34:39.653152847 +0000 UTC m=+24.886950896 (delta=96.415139ms)
	I0816 18:34:39.777346   81976 fix.go:200] guest clock delta is within tolerance: 96.415139ms
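The guest-clock check runs date +%s.%N inside the VM and compares it with the host clock; here the ~96 ms delta is accepted. A rough manual equivalent (the tolerance threshold is not shown in this log, so it is omitted; assumes bc on the host):

    guest=$(out/minikube-linux-amd64 -p newest-cni-774287 ssh "date +%s.%N")
    host=$(date +%s.%N)
    echo "guest-host delta: $(echo "$guest - $host" | bc) s"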
	I0816 18:34:39.777354   81976 start.go:83] releasing machines lock for "newest-cni-774287", held for 24.904916568s
	I0816 18:34:39.777384   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.777658   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:39.780903   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.781313   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.781343   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.781470   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.781967   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.782154   81976 main.go:141] libmachine: (newest-cni-774287) Calling .DriverName
	I0816 18:34:39.782247   81976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:34:39.782286   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.782401   81976 ssh_runner.go:195] Run: cat /version.json
	I0816 18:34:39.782425   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHHostname
	I0816 18:34:39.784819   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785157   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785263   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.785296   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785471   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.785568   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:39.785594   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:39.785628   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.785782   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHPort
	I0816 18:34:39.785792   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.785965   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHKeyPath
	I0816 18:34:39.785962   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.786115   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetSSHUsername
	I0816 18:34:39.786259   81976 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/newest-cni-774287/id_rsa Username:docker}
	I0816 18:34:39.905988   81976 ssh_runner.go:195] Run: systemctl --version
	I0816 18:34:39.912029   81976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:34:40.073666   81976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:34:40.079320   81976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:34:40.079396   81976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:34:40.094734   81976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
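Existing bridge/podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so they can be inspected or restored later. For example, in the guest:

    ls /etc/cni/net.d/*.mk_disabled
    # hypothetical restore of the config disabled above:
    # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist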
	I0816 18:34:40.094760   81976 start.go:495] detecting cgroup driver to use...
	I0816 18:34:40.094812   81976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:34:40.110377   81976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:34:40.123825   81976 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:34:40.123886   81976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:34:40.137975   81976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:34:40.150867   81976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:34:40.273358   81976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:34:40.433238   81976 docker.go:233] disabling docker service ...
	I0816 18:34:40.433314   81976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:34:40.447429   81976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:34:40.462059   81976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:34:40.592974   81976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:34:40.722449   81976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
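After the stop/disable/mask sequence above, docker and cri-docker should no longer be startable. One way to confirm the unit states in the guest (the expected output is an assumption, not taken from this log):

    systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket
    # expected: each unit reports masked or disabled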
	I0816 18:34:40.736925   81976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:34:40.755887   81976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:34:40.755957   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.766221   81976 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:34:40.766281   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.777391   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.787248   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.798119   81976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:34:40.808410   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.818609   81976 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:34:40.836177   81976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
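The crictl.yaml write and the sed edits above leave the runtime pointed at CRI-O's own socket, pinned to the pause:3.10 image, using the cgroupfs cgroup manager with conmon in the pod cgroup, and allowing unprivileged low ports. A spot-check of the resulting files in the guest, with expected values inferred from the commands above:

    cat /etc/crictl.yaml
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    sudo grep -A1 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
    #   "net.ipv4.ip_unprivileged_port_start=0",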
	I0816 18:34:40.847019   81976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:34:40.856589   81976 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:34:40.856678   81976 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:34:40.870035   81976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
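The sysctl probe failed only because br_netfilter was not loaded yet; after the modprobe and the ip_forward write above, both kernel prerequisites for bridged pod traffic can be re-checked:

    lsmod | grep br_netfilter
    sysctl net.ipv4.ip_forward                  # expected: net.ipv4.ip_forward = 1
    sysctl net.bridge.bridge-nf-call-iptables   # present once br_netfilter is loaded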
	I0816 18:34:40.879791   81976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:34:41.012032   81976 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:34:41.149016   81976 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:34:41.149106   81976 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:34:41.153646   81976 start.go:563] Will wait 60s for crictl version
	I0816 18:34:41.153710   81976 ssh_runner.go:195] Run: which crictl
	I0816 18:34:41.158088   81976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:34:41.199450   81976 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:34:41.199520   81976 ssh_runner.go:195] Run: crio --version
	I0816 18:34:41.227587   81976 ssh_runner.go:195] Run: crio --version
	I0816 18:34:41.255800   81976 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:34:41.257025   81976 main.go:141] libmachine: (newest-cni-774287) Calling .GetIP
	I0816 18:34:41.259537   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:41.259924   81976 main.go:141] libmachine: (newest-cni-774287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:15:e2", ip: ""} in network mk-newest-cni-774287: {Iface:virbr1 ExpiryTime:2024-08-16 19:34:29 +0000 UTC Type:0 Mac:52:54:00:2d:15:e2 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:newest-cni-774287 Clientid:01:52:54:00:2d:15:e2}
	I0816 18:34:41.259954   81976 main.go:141] libmachine: (newest-cni-774287) DBG | domain newest-cni-774287 has defined IP address 192.168.39.194 and MAC address 52:54:00:2d:15:e2 in network mk-newest-cni-774287
	I0816 18:34:41.260129   81976 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:34:41.264282   81976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
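Both host-alias updates in this log (host.minikube.internal here, control-plane.minikube.internal later) use the same idempotent pattern: drop any old line for the name, append the current mapping, then copy the file back with sudo. As a standalone sketch, with IP and NAME as placeholders:

    IP=192.168.39.1
    NAME=host.minikube.internal
    { grep -v "$(printf '\t')$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts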
	I0816 18:34:41.278214   81976 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0816 18:34:41.279347   81976 kubeadm.go:883] updating cluster {Name:newest-cni-774287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:34:41.279486   81976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:34:41.279554   81976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:34:41.311048   81976 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:34:41.311127   81976 ssh_runner.go:195] Run: which lz4
	I0816 18:34:41.315060   81976 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:34:41.319427   81976 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:34:41.319463   81976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:34:42.497551   81976 crio.go:462] duration metric: took 1.182531558s to copy over tarball
	I0816 18:34:42.497631   81976 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:34:44.549865   81976 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.05220237s)
	I0816 18:34:44.549898   81976 crio.go:469] duration metric: took 2.052316244s to extract the tarball
	I0816 18:34:44.549908   81976 ssh_runner.go:146] rm: /preloaded.tar.lz4
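Because /preloaded.tar.lz4 was missing in the guest, the ~389 MB preload tarball is copied over and unpacked into /var, which is what lets the next crictl run report all images as preloaded. The guest-side steps, as run above:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json   # should now list the v1.31.0 control-plane images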
	I0816 18:34:44.589205   81976 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:34:44.633492   81976 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:34:44.633513   81976 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:34:44.633520   81976 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.31.0 crio true true} ...
	I0816 18:34:44.633634   81976 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-774287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-774287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
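The unit fragment above becomes the kubelet systemd drop-in (it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). After the daemon-reload, the effective ExecStart can be confirmed in the guest with:

    systemctl cat kubelet | grep -A1 '^ExecStart='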
	I0816 18:34:44.633723   81976 ssh_runner.go:195] Run: crio config
	I0816 18:34:44.676555   81976 cni.go:84] Creating CNI manager for ""
	I0816 18:34:44.676585   81976 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:34:44.676599   81976 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0816 18:34:44.676645   81976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-774287 NodeName:newest-cni-774287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:34:44.676823   81976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-774287"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
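The generated manifest above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp line a few lines below). If a kubeadm binary of the matching version is already on the node, the staged file can be sanity-checked before it is used; this assumes the kubeadm config validate subcommand, which exists in recent kubeadm releases:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new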
	
	I0816 18:34:44.676895   81976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:34:44.687664   81976 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:34:44.687731   81976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:34:44.697996   81976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0816 18:34:44.714412   81976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:34:44.730623   81976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0816 18:34:44.746941   81976 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0816 18:34:44.750540   81976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:34:44.762447   81976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	
	
	==> CRI-O <==
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.665651562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833287665630084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4ffbd2f-e4ed-4233-8def-b1def2af8546 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.666199997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cedd948d-811e-4e41-ba48-064fc9e43fae name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.666262834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cedd948d-811e-4e41-ba48-064fc9e43fae name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.666450540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cedd948d-811e-4e41-ba48-064fc9e43fae name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.702643279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bbcd5c9-ebe2-4280-b422-ba08c11c5088 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.702729933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bbcd5c9-ebe2-4280-b422-ba08c11c5088 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.703946746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74e23ced-9e06-4b6c-9103-d545d54128c7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.704531134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833287704506609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74e23ced-9e06-4b6c-9103-d545d54128c7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.704992863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dcb883a-eb0f-4743-b9d3-16e7fc6b3109 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.705096545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dcb883a-eb0f-4743-b9d3-16e7fc6b3109 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.705370264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9dcb883a-eb0f-4743-b9d3-16e7fc6b3109 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.744856415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee272b33-4096-4dc3-978f-1bc8f8574aa3 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.744953742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee272b33-4096-4dc3-978f-1bc8f8574aa3 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.746225086Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6ae81bb-fc03-4e16-ab35-1c459b834dfd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.746657638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833287746631848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6ae81bb-fc03-4e16-ab35-1c459b834dfd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.747085835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ad4ee07-dd64-48e0-a81a-8179cfcf7d3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.747201528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ad4ee07-dd64-48e0-a81a-8179cfcf7d3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.747414419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ad4ee07-dd64-48e0-a81a-8179cfcf7d3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.778824873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15d52100-6cd5-4647-8365-36064ffad395 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.778894946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15d52100-6cd5-4647-8365-36064ffad395 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.780062148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8878f5f-17db-4e36-978f-31fe7a9e5d95 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.780585949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833287780562244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8878f5f-17db-4e36-978f-31fe7a9e5d95 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.781074294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e937f730-9b2c-4907-8021-a88df24b4fe3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.781174729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e937f730-9b2c-4907-8021-a88df24b4fe3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:47 embed-certs-777541 crio[736]: time="2024-08-16 18:34:47.781370952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723832140172178791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e852,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb524cb685d6bf3dc37257140f2e9a94f5bb5bd0bba0637396282b003e70175e,PodSandboxId:2022c533c0df1a055930e1ce1a93a252a21a4005c3c2701897c30ae194b0c47f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723832119804513910,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb629961-107a-4695-8482-6072d7bab160,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d,PodSandboxId:e25c7a557e9b5d93671dfb881d1122e6d91fa6853444a85157abda8a2c13cfe6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723832116930817598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8njs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c31e1-4c2a-4dd8-ba60-62998504c55e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf,PodSandboxId:d2f7a4d8ee312c29d90db2c136370ed244c30e957652e73807bbcdc31c8245c7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723832109408363708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5rl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcbc8903-6fa2-4f55-9
ec0-92b77e21fb08,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e,PodSandboxId:663df6db7136a976826aaaf88c4e1823067edfed6bf8c598f8f6d136918acf15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723832109359982498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc6c4da-0e0f-45cc-84a6-bd4907f5e8
52,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468,PodSandboxId:84d93b538a2582bcd546399dc7a0fae9489d5c294e7e3ba490e59ab62a796b5b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723832104832811736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22074243edb8e08ecfa486f630ccc29,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc,PodSandboxId:b9f249361f3f5270bd416e8c14197235c7e922feb092ccf237b361abd9b2148b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723832104828044327,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96f8ddd7fc45d6d6753c8a9d4ff3a367,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c,PodSandboxId:3376ece6a713c0dd5d7a72c91ef1b6c79ed390ac94da860ba4ffbde38e6b8c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723832104844608255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b85a095b7258a42f869852fe62b607,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b,PodSandboxId:420ec3a2ffdea6bbd41b8799792cc17ee49c35c3d7c0ed9dd775c3c5cca8bb64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723832104818800668,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-777541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6efe5d70b9d3c0a5949e8858ebf4ca8d,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e937f730-9b2c-4907-8021-a88df24b4fe3 name=/runtime.v1.RuntimeService/ListContainers
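
The Version / ImageFsInfo / ListContainers request-response pairs above are the kubelet polling cri-o over the CRI gRPC socket. As a rough aid to reading them, the same container inventory can be captured by hand from inside the node; a minimal sketch, assuming crictl is available there (e.g. via minikube ssh) and cri-o is listening on its default socket:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Dump the full container list from cri-o, mirroring the
    // /runtime.v1.RuntimeService/ListContainers responses in the log above.
    func main() {
    	out, err := exec.Command("sudo", "crictl",
    		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
    		"ps", "-a", "-o", "json").CombinedOutput()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    	}
    	fmt.Println(string(out))
    }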
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08db52c38328f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   663df6db7136a       storage-provisioner
	eb524cb685d6b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   2022c533c0df1       busybox
	3918f8eb004ee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   e25c7a557e9b5       coredns-6f6b679f8f-8njs2
	92401f8df7e94       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Running             kube-proxy                1                   d2f7a4d8ee312       kube-proxy-j5rl7
	81f4d0a570266       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   663df6db7136a       storage-provisioner
	72d29c313c76c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      19 minutes ago      Running             kube-controller-manager   1                   3376ece6a713c       kube-controller-manager-embed-certs-777541
	fd0d63ff38eb4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   84d93b538a258       etcd-embed-certs-777541
	99d68f23b3bc9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Running             kube-scheduler            1                   b9f249361f3f5       kube-scheduler-embed-certs-777541
	8c78984b6e3a7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      19 minutes ago      Running             kube-apiserver            1                   420ec3a2ffdea       kube-apiserver-embed-certs-777541
	
	
	==> coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34987 - 15764 "HINFO IN 2476056286808905898.6093248778645637882. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017274135s
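
The SHA512 reported by plugin/reload is computed over the Corefile that CoreDNS loaded from the kube-system/coredns ConfigMap, and the single NXDOMAIN line is its startup HINFO self-check. A minimal sketch for pulling the Corefile behind that hash, assuming kubectl can reach the embed-certs-777541 context:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Print the Corefile behind the "Running configuration SHA512 = ..." line above.
    func main() {
    	out, err := exec.Command("kubectl", "--context", "embed-certs-777541",
    		"-n", "kube-system", "get", "configmap", "coredns",
    		"-o", "jsonpath={.data.Corefile}").CombinedOutput()
    	if err != nil {
    		fmt.Println("kubectl failed:", err)
    	}
    	fmt.Println(string(out))
    }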
	
	
	==> describe nodes <==
	Name:               embed-certs-777541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-777541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=embed-certs-777541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T18_05_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 18:05:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-777541
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:30:56 +0000   Fri, 16 Aug 2024 18:05:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:30:56 +0000   Fri, 16 Aug 2024 18:05:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:30:56 +0000   Fri, 16 Aug 2024 18:05:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:30:56 +0000   Fri, 16 Aug 2024 18:15:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.218
	  Hostname:    embed-certs-777541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2459fc1d07041d0b6f59364f1497951
	  System UUID:                b2459fc1-d070-41d0-b6f5-9364f1497951
	  Boot ID:                    ece19e17-996b-42c3-b7d3-9e5df75bd9fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-8njs2                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-777541                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-777541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-777541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-j5rl7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-777541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-6hkzb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-777541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-777541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-777541 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-777541 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-777541 event: Registered Node embed-certs-777541 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-777541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-777541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-777541 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-777541 event: Registered Node embed-certs-777541 in Controller
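
The percentages in the Allocated resources table above follow directly from the capacity figures: 850m of CPU requests against 2 CPUs (2000m), and 370Mi / 170Mi of memory requests / limits against 2164184Ki, which the table shows as 42%, 17% and 8%. A small sketch of the same arithmetic, with the values copied from the tables above:

    package main

    import "fmt"

    // Recompute the Allocated resources percentages from the node capacity shown above.
    func main() {
    	const (
    		capCPUMilli = 2 * 1000   // Capacity: cpu 2
    		capMemKi    = 2164184    // Capacity: memory 2164184Ki
    		reqCPUMilli = 850        // Requests: cpu 850m
    		reqMemKi    = 370 * 1024 // Requests: memory 370Mi
    		limMemKi    = 170 * 1024 // Limits: memory 170Mi
    	)
    	fmt.Printf("cpu requests:    %.1f%%\n", 100*float64(reqCPUMilli)/capCPUMilli)
    	fmt.Printf("memory requests: %.1f%%\n", 100*float64(reqMemKi)/capMemKi)
    	fmt.Printf("memory limits:   %.1f%%\n", 100*float64(limMemKi)/capMemKi)
    }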
	
	
	==> dmesg <==
	[Aug16 18:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054069] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042273] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.035837] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.950396] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.408864] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.741474] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.054029] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054801] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.182881] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.118336] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.252108] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +3.935901] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[Aug16 18:15] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.061674] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.526971] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.396369] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[  +3.318516] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.186739] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] <==
	{"level":"info","ts":"2024-08-16T18:15:05.378052Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T18:15:07.206046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T18:15:07.206100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T18:15:07.206160Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 received MsgPreVoteResp from 73f9a34abd6fe987 at term 2"}
	{"level":"info","ts":"2024-08-16T18:15:07.206177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.206183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 received MsgVoteResp from 73f9a34abd6fe987 at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.206192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"73f9a34abd6fe987 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.206199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 73f9a34abd6fe987 elected leader 73f9a34abd6fe987 at term 3"}
	{"level":"info","ts":"2024-08-16T18:15:07.214901Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"73f9a34abd6fe987","local-member-attributes":"{Name:embed-certs-777541 ClientURLs:[https://192.168.61.218:2379]}","request-path":"/0/members/73f9a34abd6fe987/attributes","cluster-id":"2cb457bdfb3a296b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T18:15:07.215281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:15:07.215538Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T18:15:07.215599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T18:15:07.215748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T18:15:07.216783Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:15:07.217181Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T18:15:07.217676Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.218:2379"}
	{"level":"info","ts":"2024-08-16T18:15:07.218513Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T18:25:07.245313Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-08-16T18:25:07.255343Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"9.666948ms","hash":1634259651,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2732032,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-16T18:25:07.255422Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1634259651,"revision":859,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T18:30:07.252864Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1101}
	{"level":"info","ts":"2024-08-16T18:30:07.257074Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1101,"took":"3.586434ms","hash":2763535844,"current-db-size-bytes":2732032,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-16T18:30:07.257205Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2763535844,"revision":1101,"compact-revision":859}
	{"level":"warn","ts":"2024-08-16T18:34:47.568584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.990696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T18:34:47.568821Z","caller":"traceutil/trace.go:171","msg":"trace[1573112935] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1571; }","duration":"277.314913ms","start":"2024-08-16T18:34:47.291477Z","end":"2024-08-16T18:34:47.568792Z","steps":["trace[1573112935] 'range keys from in-memory index tree'  (duration: 276.941759ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:34:49 up 20 min,  0 users,  load average: 0.06, 0.10, 0.09
	Linux embed-certs-777541 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:30:09.475035       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:30:09.475049       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:30:09.476198       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:30:09.476350       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:31:09.476465       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:31:09.476806       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:31:09.476695       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:31:09.476922       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:31:09.478042       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:31:09.478087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 18:33:09.479010       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:33:09.479207       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 18:33:09.479279       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 18:33:09.479294       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 18:33:09.480724       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:33:09.480757       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
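
The repeated 503s for v1beta1.metrics.k8s.io mean the aggregated metrics API backed by metrics-server is not reachable from the apiserver, which is also why the controller-manager below keeps reporting stale GroupVersion discovery for metrics.k8s.io/v1beta1. The usual first checks are the APIService's availability and the endpoints behind its service; a minimal sketch, assuming the embed-certs-777541 context and the addon's default service name metrics-server in kube-system:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Show why the aggregated metrics API is unavailable: the APIService AVAILABLE
    // column plus the endpoints backing metrics-server in kube-system.
    func main() {
    	checks := [][]string{
    		{"get", "apiservice", "v1beta1.metrics.k8s.io"},
    		{"-n", "kube-system", "get", "endpoints", "metrics-server"},
    	}
    	for _, args := range checks {
    		full := append([]string{"--context", "embed-certs-777541"}, args...)
    		out, err := exec.Command("kubectl", full...).CombinedOutput()
    		fmt.Printf("$ kubectl %v (err=%v)\n%s\n", args, err, out)
    	}
    }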
	
	
	==> kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] <==
	E0816 18:29:42.210300       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:29:42.673036       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:30:12.216362       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:30:12.680705       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:30:42.222239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:30:42.687736       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:30:56.982510       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-777541"
	I0816 18:31:10.994649       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.179869ms"
	E0816 18:31:12.228850       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:31:12.695333       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 18:31:24.994010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="86.276µs"
	E0816 18:31:42.235024       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:31:42.703357       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:32:12.241461       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:32:12.711013       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:32:42.248213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:32:42.720703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:33:12.254652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:33:12.728512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:33:42.260302       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:33:42.738583       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:34:12.266885       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:34:12.747067       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 18:34:42.275237       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 18:34:42.755228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 18:15:09.705563       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 18:15:09.715958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.218"]
	E0816 18:15:09.716033       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 18:15:09.743816       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 18:15:09.743851       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 18:15:09.743880       1 server_linux.go:169] "Using iptables Proxier"
	I0816 18:15:09.746001       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 18:15:09.746265       1 server.go:483] "Version info" version="v1.31.0"
	I0816 18:15:09.746287       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:15:09.751445       1 config.go:197] "Starting service config controller"
	I0816 18:15:09.751514       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 18:15:09.751555       1 config.go:104] "Starting endpoint slice config controller"
	I0816 18:15:09.751577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 18:15:09.752561       1 config.go:326] "Starting node config controller"
	I0816 18:15:09.752849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 18:15:09.851695       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 18:15:09.851797       1 shared_informer.go:320] Caches are synced for service config
	I0816 18:15:09.853218       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] <==
	I0816 18:15:06.361493       1 serving.go:386] Generated self-signed cert in-memory
	W0816 18:15:08.432686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 18:15:08.432809       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 18:15:08.432839       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 18:15:08.432901       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 18:15:08.479820       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 18:15:08.479859       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 18:15:08.482198       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 18:15:08.482264       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 18:15:08.482219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 18:15:08.482312       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 18:15:08.582917       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 18:33:33 embed-certs-777541 kubelet[944]: E0816 18:33:33.977668     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:33:43 embed-certs-777541 kubelet[944]: E0816 18:33:43.266211     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833223265878376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:33:43 embed-certs-777541 kubelet[944]: E0816 18:33:43.266256     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833223265878376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:33:46 embed-certs-777541 kubelet[944]: E0816 18:33:46.977913     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:33:53 embed-certs-777541 kubelet[944]: E0816 18:33:53.267854     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833233267588056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:33:53 embed-certs-777541 kubelet[944]: E0816 18:33:53.267893     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833233267588056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:01 embed-certs-777541 kubelet[944]: E0816 18:34:01.977638     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:34:02 embed-certs-777541 kubelet[944]: E0816 18:34:02.998676     944 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 18:34:02 embed-certs-777541 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 18:34:02 embed-certs-777541 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 18:34:02 embed-certs-777541 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 18:34:02 embed-certs-777541 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 18:34:03 embed-certs-777541 kubelet[944]: E0816 18:34:03.269691     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833243269173789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:03 embed-certs-777541 kubelet[944]: E0816 18:34:03.269775     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833243269173789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:13 embed-certs-777541 kubelet[944]: E0816 18:34:13.271417     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833253270950646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:13 embed-certs-777541 kubelet[944]: E0816 18:34:13.271778     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833253270950646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:15 embed-certs-777541 kubelet[944]: E0816 18:34:15.978183     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:34:23 embed-certs-777541 kubelet[944]: E0816 18:34:23.273847     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833263273553512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:23 embed-certs-777541 kubelet[944]: E0816 18:34:23.273895     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833263273553512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:26 embed-certs-777541 kubelet[944]: E0816 18:34:26.977778     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:34:33 embed-certs-777541 kubelet[944]: E0816 18:34:33.275906     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833273275641157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:33 embed-certs-777541 kubelet[944]: E0816 18:34:33.276293     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833273275641157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:40 embed-certs-777541 kubelet[944]: E0816 18:34:40.979042     944 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-6hkzb" podUID="3e01da8d-7ddf-47cc-9079-5162cf2c2b53"
	Aug 16 18:34:43 embed-certs-777541 kubelet[944]: E0816 18:34:43.278074     944 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833283277744406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 18:34:43 embed-certs-777541 kubelet[944]: E0816 18:34:43.278407     944 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833283277744406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] <==
	I0816 18:15:40.254869       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 18:15:40.262831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 18:15:40.262908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 18:15:57.665960       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 18:15:57.666602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81595c3c-39f4-4f4e-a45f-e2659ab69722", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-777541_31124775-bfcb-43b2-b7cc-5d32dd9342a4 became leader
	I0816 18:15:57.668218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-777541_31124775-bfcb-43b2-b7cc-5d32dd9342a4!
	I0816 18:15:57.768771       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-777541_31124775-bfcb-43b2-b7cc-5d32dd9342a4!
	
	
	==> storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] <==
	I0816 18:15:09.530878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 18:15:39.534335       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-777541 -n embed-certs-777541
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-777541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-6hkzb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-777541 describe pod metrics-server-6867b74b74-6hkzb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-777541 describe pod metrics-server-6867b74b74-6hkzb: exit status 1 (85.46388ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-6hkzb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-777541 describe pod metrics-server-6867b74b74-6hkzb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (367.85s)
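In the post-mortem above, the non-running pod list is gathered across all namespaces, but the follow-up `kubectl describe pod` is issued without a namespace flag, so it searches `default`; that likely explains the NotFound exit here, since the kubelet log above places the pod in `kube-system`. A minimal manual re-run of the same two steps, assuming the embed-certs-777541 context is still present — the `-n kube-system` flag and the existence guard are illustrative additions, not part of helpers_test.go:

	# list non-running pods in all namespaces, as helpers_test.go does
	kubectl --context embed-certs-777541 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# describe the metrics-server pod in its actual namespace, guarding against it having been replaced
	if kubectl --context embed-certs-777541 -n kube-system get pod metrics-server-6867b74b74-6hkzb >/dev/null 2>&1; then
	  kubectl --context embed-certs-777541 -n kube-system describe pod metrics-server-6867b74b74-6hkzb
	fi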

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (141.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:32:48.775761   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:33:21.061674   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:33:29.900212   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0816 18:33:43.679796   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (224.847208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-783465" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-783465 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-783465 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.408µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-783465 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
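
The failure above is the label-selector poll giving up: helpers_test.go keeps listing pods in the kubernetes-dashboard namespace with k8s-app=kubernetes-dashboard, and every attempt fails with "connection refused" because the apiserver at 192.168.39.211:8443 never comes back after the restart. A minimal client-go sketch of an equivalent check follows; it is an illustration, not the minikube test helper itself, and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the CI job keeps its own under the minikube test home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// With the apiserver stopped, this is where "connect: connection refused" surfaces.
		fmt.Println("pod list failed:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
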
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (215.003043ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-783465 logs -n 25
E0816 18:34:11.231429   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-783465 logs -n 25: (1.615634074s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-791304 sudo cat                      | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | sudo cat                                               |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | find /etc/crio -type f -exec                           |                              |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                             |                              |         |         |                     |                     |
	| ssh     | -p custom-flannel-791304 sudo                          | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	|         | crio config                                            |                              |         |         |                     |                     |
	| delete  | -p custom-flannel-791304                               | custom-flannel-791304        | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:05 UTC |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:05 UTC | 16 Aug 24 18:07 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-777541            | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC | 16 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-864476             | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-256678  | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC | 16 Aug 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:07 UTC |                     |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-777541                 | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-783465        | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-777541                                  | embed-certs-777541           | jenkins | v1.33.1 | 16 Aug 24 18:08 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-864476                  | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-864476                                   | no-preload-864476            | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-256678       | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-256678 | jenkins | v1.33.1 | 16 Aug 24 18:09 UTC | 16 Aug 24 18:19 UTC |
	|         | default-k8s-diff-port-256678                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-783465             | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC | 16 Aug 24 18:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-783465                              | old-k8s-version-783465       | jenkins | v1.33.1 | 16 Aug 24 18:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:10:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:10:53.101149   75402 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:10:53.101401   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101412   75402 out.go:358] Setting ErrFile to fd 2...
	I0816 18:10:53.101418   75402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:10:53.101600   75402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 18:10:53.102131   75402 out.go:352] Setting JSON to false
	I0816 18:10:53.103018   75402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1723825102,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 18:10:53.103076   75402 start.go:139] virtualization: kvm guest
	I0816 18:10:53.105216   75402 out.go:177] * [old-k8s-version-783465] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 18:10:53.106496   75402 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:10:53.106504   75402 notify.go:220] Checking for updates...
	I0816 18:10:53.109235   75402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:10:53.110572   75402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:10:53.111747   75402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 18:10:53.113164   75402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 18:10:53.114589   75402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:10:53.116284   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:10:53.116746   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.116806   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.132445   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0816 18:10:53.132886   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.133456   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.133494   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.133836   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.134015   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.135791   75402 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:10:53.136942   75402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:10:53.137229   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:10:53.137260   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:10:53.151853   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0816 18:10:53.152327   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:10:53.152881   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:10:53.152905   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:10:53.153159   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:10:53.153307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:10:53.188002   75402 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 18:10:53.189287   75402 start.go:297] selected driver: kvm2
	I0816 18:10:53.189309   75402 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.189432   75402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:10:53.190098   75402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.190187   75402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 18:10:53.205024   75402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 18:10:53.205386   75402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:10:53.205417   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:10:53.205425   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:10:53.205458   75402 start.go:340] cluster config:
	{Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:10:53.205557   75402 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:10:53.207241   75402 out.go:177] * Starting "old-k8s-version-783465" primary control-plane node in "old-k8s-version-783465" cluster
	I0816 18:10:53.208254   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:10:53.208286   75402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 18:10:53.208298   75402 cache.go:56] Caching tarball of preloaded images
	I0816 18:10:53.208386   75402 preload.go:172] Found /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 18:10:53.208400   75402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 18:10:53.208510   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:10:53.208736   75402 start.go:360] acquireMachinesLock for old-k8s-version-783465: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:10:54.604889   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:10:57.676891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:03.756940   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:06.828911   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:12.908885   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:15.980925   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:22.060891   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:25.132961   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:31.212919   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:34.284876   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:40.365032   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:43.436910   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:49.516914   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:52.588969   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:11:58.668915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:01.740965   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:07.820898   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:10.892922   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:16.972913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:20.044913   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:26.124921   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:29.196968   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:35.276952   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:38.348971   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:44.428932   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:47.500897   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:53.580923   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:12:56.652927   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:02.732992   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:05.804929   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:11.884953   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:14.956943   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:21.036963   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:24.108915   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:30.188851   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
	I0816 18:13:33.260936   74510 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.218:22: connect: no route to host
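
The repeated "Error dialing TCP 192.168.61.218:22 ... no route to host" lines are the kvm2 driver probing the embed-certs VM's SSH port while the guest is still unreachable. A minimal standard-library sketch of such a port probe follows; it is an illustration, not libmachine's actual code, and the retry count and timeout are assumptions.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.61.218:22" // address taken from the log lines above
	for attempt := 1; attempt <= 10; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// While the VM is down this prints e.g. "connect: no route to host".
			fmt.Printf("attempt %d: dial failed: %v\n", attempt, err)
			time.Sleep(3 * time.Second)
			continue
		}
		conn.Close()
		fmt.Println("SSH port reachable")
		return
	}
}
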
	I0816 18:13:36.264963   74828 start.go:364] duration metric: took 4m2.37855556s to acquireMachinesLock for "no-preload-864476"
	I0816 18:13:36.265020   74828 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:36.265027   74828 fix.go:54] fixHost starting: 
	I0816 18:13:36.265379   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:36.265409   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:36.280707   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
	I0816 18:13:36.281167   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:36.281747   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:13:36.281778   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:36.282122   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:36.282330   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:36.282457   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:13:36.284064   74828 fix.go:112] recreateIfNeeded on no-preload-864476: state=Stopped err=<nil>
	I0816 18:13:36.284084   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	W0816 18:13:36.284217   74828 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:36.286749   74828 out.go:177] * Restarting existing kvm2 VM for "no-preload-864476" ...
	I0816 18:13:36.262619   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:36.262654   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.262944   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:13:36.262967   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:13:36.263222   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:13:36.264803   74510 machine.go:96] duration metric: took 4m37.429582668s to provisionDockerMachine
	I0816 18:13:36.264858   74510 fix.go:56] duration metric: took 4m37.449862851s for fixHost
	I0816 18:13:36.264867   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 4m37.449881856s
	W0816 18:13:36.264895   74510 start.go:714] error starting host: provision: host is not running
	W0816 18:13:36.264994   74510 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 18:13:36.265005   74510 start.go:729] Will try again in 5 seconds ...
	I0816 18:13:36.288329   74828 main.go:141] libmachine: (no-preload-864476) Calling .Start
	I0816 18:13:36.288484   74828 main.go:141] libmachine: (no-preload-864476) Ensuring networks are active...
	I0816 18:13:36.289285   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network default is active
	I0816 18:13:36.289912   74828 main.go:141] libmachine: (no-preload-864476) Ensuring network mk-no-preload-864476 is active
	I0816 18:13:36.290318   74828 main.go:141] libmachine: (no-preload-864476) Getting domain xml...
	I0816 18:13:36.291176   74828 main.go:141] libmachine: (no-preload-864476) Creating domain...
	I0816 18:13:37.504191   74828 main.go:141] libmachine: (no-preload-864476) Waiting to get IP...
	I0816 18:13:37.505110   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.505575   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.505621   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.505543   75973 retry.go:31] will retry after 308.411866ms: waiting for machine to come up
	I0816 18:13:37.816219   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:37.816877   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:37.816931   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:37.816852   75973 retry.go:31] will retry after 321.445064ms: waiting for machine to come up
	I0816 18:13:38.140594   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.141059   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.141082   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.141018   75973 retry.go:31] will retry after 337.935433ms: waiting for machine to come up
	I0816 18:13:38.480699   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.481110   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.481135   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.481033   75973 retry.go:31] will retry after 449.775503ms: waiting for machine to come up
	I0816 18:13:41.266589   74510 start.go:360] acquireMachinesLock for embed-certs-777541: {Name:mke1ffa1f2d6d714bdd85e184816ba8f4dfd08f1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 18:13:38.932812   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:38.933232   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:38.933259   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:38.933171   75973 retry.go:31] will retry after 482.676832ms: waiting for machine to come up
	I0816 18:13:39.417939   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:39.418323   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:39.418350   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:39.418276   75973 retry.go:31] will retry after 740.37516ms: waiting for machine to come up
	I0816 18:13:40.160491   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:40.160917   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:40.160942   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:40.160867   75973 retry.go:31] will retry after 1.10464436s: waiting for machine to come up
	I0816 18:13:41.267213   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:41.267654   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:41.267680   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:41.267613   75973 retry.go:31] will retry after 1.395131164s: waiting for machine to come up
	I0816 18:13:42.664731   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:42.665229   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:42.665252   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:42.665181   75973 retry.go:31] will retry after 1.560403289s: waiting for machine to come up
	I0816 18:13:44.226847   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:44.227375   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:44.227404   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:44.227342   75973 retry.go:31] will retry after 1.647944685s: waiting for machine to come up
	I0816 18:13:45.876965   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:45.877411   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:45.877440   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:45.877366   75973 retry.go:31] will retry after 1.971325886s: waiting for machine to come up
	I0816 18:13:47.849950   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:47.850457   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:47.850490   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:47.850383   75973 retry.go:31] will retry after 2.95642392s: waiting for machine to come up
	I0816 18:13:50.810560   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:50.811013   74828 main.go:141] libmachine: (no-preload-864476) DBG | unable to find current IP address of domain no-preload-864476 in network mk-no-preload-864476
	I0816 18:13:50.811045   74828 main.go:141] libmachine: (no-preload-864476) DBG | I0816 18:13:50.810930   75973 retry.go:31] will retry after 4.510008193s: waiting for machine to come up
	I0816 18:13:56.529339   75006 start.go:364] duration metric: took 4m6.515818295s to acquireMachinesLock for "default-k8s-diff-port-256678"
	I0816 18:13:56.529444   75006 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:13:56.529459   75006 fix.go:54] fixHost starting: 
	I0816 18:13:56.529851   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:13:56.529890   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:13:56.547077   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I0816 18:13:56.547585   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:13:56.548068   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:13:56.548091   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:13:56.548421   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:13:56.548610   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:13:56.548766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:13:56.550373   75006 fix.go:112] recreateIfNeeded on default-k8s-diff-port-256678: state=Stopped err=<nil>
	I0816 18:13:56.550414   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	W0816 18:13:56.550604   75006 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:13:56.552781   75006 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-256678" ...
	I0816 18:13:55.326062   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.326558   74828 main.go:141] libmachine: (no-preload-864476) Found IP for machine: 192.168.50.50
	I0816 18:13:55.326576   74828 main.go:141] libmachine: (no-preload-864476) Reserving static IP address...
	I0816 18:13:55.326593   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has current primary IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.327109   74828 main.go:141] libmachine: (no-preload-864476) Reserved static IP address: 192.168.50.50
	I0816 18:13:55.327142   74828 main.go:141] libmachine: (no-preload-864476) Waiting for SSH to be available...
	I0816 18:13:55.327167   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.327191   74828 main.go:141] libmachine: (no-preload-864476) DBG | skip adding static IP to network mk-no-preload-864476 - found existing host DHCP lease matching {name: "no-preload-864476", mac: "52:54:00:f3:50:53", ip: "192.168.50.50"}
	I0816 18:13:55.327205   74828 main.go:141] libmachine: (no-preload-864476) DBG | Getting to WaitForSSH function...
	I0816 18:13:55.329001   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329350   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.329378   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.329534   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH client type: external
	I0816 18:13:55.329574   74828 main.go:141] libmachine: (no-preload-864476) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa (-rw-------)
	I0816 18:13:55.329604   74828 main.go:141] libmachine: (no-preload-864476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:13:55.329622   74828 main.go:141] libmachine: (no-preload-864476) DBG | About to run SSH command:
	I0816 18:13:55.329636   74828 main.go:141] libmachine: (no-preload-864476) DBG | exit 0
	I0816 18:13:55.452553   74828 main.go:141] libmachine: (no-preload-864476) DBG | SSH cmd err, output: <nil>: 
	I0816 18:13:55.452964   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetConfigRaw
	I0816 18:13:55.453557   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.455951   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456334   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.456370   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.456564   74828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/config.json ...
	I0816 18:13:55.456782   74828 machine.go:93] provisionDockerMachine start ...
	I0816 18:13:55.456801   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:55.456983   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.459149   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459547   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.459570   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.459730   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.459918   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460068   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.460207   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.460418   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.460603   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.460637   74828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:13:55.564875   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:13:55.564903   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565203   74828 buildroot.go:166] provisioning hostname "no-preload-864476"
	I0816 18:13:55.565229   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.565455   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.568114   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568578   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.568612   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.568777   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.568912   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569023   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.569200   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.569448   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.569649   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.569667   74828 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-864476 && echo "no-preload-864476" | sudo tee /etc/hostname
	I0816 18:13:55.686349   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-864476
	
	I0816 18:13:55.686378   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.689171   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689572   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.689608   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.689792   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.690008   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690183   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.690418   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.690623   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:55.690782   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:55.690798   74828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-864476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-864476/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-864476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:13:55.800352   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:13:55.800386   74828 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:13:55.800436   74828 buildroot.go:174] setting up certificates
	I0816 18:13:55.800452   74828 provision.go:84] configureAuth start
	I0816 18:13:55.800470   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetMachineName
	I0816 18:13:55.800793   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:55.803388   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.803786   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.803822   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.804025   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.806567   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.806977   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.807003   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.807129   74828 provision.go:143] copyHostCerts
	I0816 18:13:55.807178   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:13:55.807198   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:13:55.807286   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:13:55.807401   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:13:55.807412   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:13:55.807439   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:13:55.807554   74828 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:13:55.807565   74828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:13:55.807588   74828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:13:55.807648   74828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.no-preload-864476 san=[127.0.0.1 192.168.50.50 localhost minikube no-preload-864476]
	I0816 18:13:55.881474   74828 provision.go:177] copyRemoteCerts
	I0816 18:13:55.881529   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:13:55.881558   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:55.884424   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.884952   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:55.884983   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:55.885138   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:55.885335   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:55.885486   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:55.885669   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:55.966915   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:13:55.989812   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:13:56.011744   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:13:56.032745   74828 provision.go:87] duration metric: took 232.276991ms to configureAuth
	I0816 18:13:56.032778   74828 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:13:56.033001   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:13:56.033096   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.035919   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036283   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.036311   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.036499   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.036713   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.036975   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.037100   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.037275   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.037294   74828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:13:56.296112   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:13:56.296140   74828 machine.go:96] duration metric: took 839.343895ms to provisionDockerMachine
	I0816 18:13:56.296152   74828 start.go:293] postStartSetup for "no-preload-864476" (driver="kvm2")
	I0816 18:13:56.296162   74828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:13:56.296177   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.296537   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:13:56.296570   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.299838   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300364   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.300396   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.300603   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.300833   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.300985   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.301187   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.383095   74828 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:13:56.387172   74828 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:13:56.387200   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:13:56.387286   74828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:13:56.387392   74828 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:13:56.387550   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:13:56.396072   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:13:56.419470   74828 start.go:296] duration metric: took 123.306644ms for postStartSetup
	I0816 18:13:56.419509   74828 fix.go:56] duration metric: took 20.154482872s for fixHost
	I0816 18:13:56.419529   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.422047   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422454   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.422503   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.422573   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.422764   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.422963   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.423150   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.423388   74828 main.go:141] libmachine: Using SSH client type: native
	I0816 18:13:56.423597   74828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0816 18:13:56.423610   74828 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:13:56.529164   74828 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832036.506687395
	
	I0816 18:13:56.529190   74828 fix.go:216] guest clock: 1723832036.506687395
	I0816 18:13:56.529200   74828 fix.go:229] Guest: 2024-08-16 18:13:56.506687395 +0000 UTC Remote: 2024-08-16 18:13:56.419513163 +0000 UTC m=+262.671840210 (delta=87.174232ms)
	I0816 18:13:56.529229   74828 fix.go:200] guest clock delta is within tolerance: 87.174232ms
	I0816 18:13:56.529246   74828 start.go:83] releasing machines lock for "no-preload-864476", held for 20.264231324s
	I0816 18:13:56.529276   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.529645   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:56.532279   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532599   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.532660   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.532824   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533348   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533522   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:13:56.533604   74828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:13:56.533663   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.533759   74828 ssh_runner.go:195] Run: cat /version.json
	I0816 18:13:56.533786   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:13:56.536427   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536711   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536822   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.536845   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.536996   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537071   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:56.537105   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:56.537191   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537334   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537430   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:13:56.537497   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.537582   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:13:56.537728   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:13:56.537964   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:13:56.654319   74828 ssh_runner.go:195] Run: systemctl --version
	I0816 18:13:56.660640   74828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:13:56.806359   74828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:13:56.812415   74828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:13:56.812489   74828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:13:56.828095   74828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:13:56.828122   74828 start.go:495] detecting cgroup driver to use...
	I0816 18:13:56.828186   74828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:13:56.843041   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:13:56.856322   74828 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:13:56.856386   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:13:56.869899   74828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:13:56.884609   74828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:13:56.990986   74828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:13:57.134218   74828 docker.go:233] disabling docker service ...
	I0816 18:13:57.134283   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:13:57.156415   74828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:13:57.172969   74828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:13:57.328279   74828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:13:57.448217   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:13:57.461630   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:13:57.478199   74828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:13:57.478271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.487845   74828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:13:57.487918   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.497895   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.509260   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.519090   74828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:13:57.529351   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.539816   74828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.559271   74828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:13:57.573027   74828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:13:57.583410   74828 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:13:57.583490   74828 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:13:57.598762   74828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:13:57.609589   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:13:57.727016   74828 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:13:57.876815   74828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:13:57.876876   74828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:13:57.882172   74828 start.go:563] Will wait 60s for crictl version
	I0816 18:13:57.882241   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:57.885706   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:13:57.926981   74828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:13:57.927070   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.957802   74828 ssh_runner.go:195] Run: crio --version
	I0816 18:13:57.984920   74828 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:13:57.986450   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetIP
	I0816 18:13:57.989584   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990205   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:13:57.990257   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:13:57.990552   74828 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 18:13:57.994584   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:13:58.007996   74828 kubeadm.go:883] updating cluster {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:13:58.008137   74828 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:13:58.008184   74828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:13:58.041643   74828 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:13:58.041672   74828 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:13:58.041751   74828 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.041778   74828 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.041794   74828 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.041741   74828 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.041779   74828 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.041899   74828 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 18:13:58.041918   74828 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.041798   74828 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:58.043471   74828 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.043388   74828 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 18:13:58.043387   74828 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.043386   74828 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.043394   74828 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.289223   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.299125   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.308703   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 18:13:58.339031   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.351467   74828 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 18:13:58.351514   74828 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.351572   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.358019   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.359198   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.385487   74828 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 18:13:58.385529   74828 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.385571   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.392417   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.506834   74828 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 18:13:58.506886   74828 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.506896   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.506924   74828 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 18:13:58.506963   74828 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.507003   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.506928   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507072   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.507004   74828 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 18:13:58.507099   74828 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.507124   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.507160   74828 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 18:13:58.507181   74828 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.507228   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:13:58.562410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.562469   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.562481   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.562554   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.562590   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.562628   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.686069   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:58.690288   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 18:13:58.690352   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 18:13:58.692851   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.692911   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.693027   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.777263   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 18:13:56.554238   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Start
	I0816 18:13:56.554426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring networks are active...
	I0816 18:13:56.555221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network default is active
	I0816 18:13:56.555599   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Ensuring network mk-default-k8s-diff-port-256678 is active
	I0816 18:13:56.556004   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Getting domain xml...
	I0816 18:13:56.556809   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Creating domain...
	I0816 18:13:57.825641   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting to get IP...
	I0816 18:13:57.826681   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827158   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:57.827219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:57.827129   76107 retry.go:31] will retry after 267.923612ms: waiting for machine to come up
	I0816 18:13:58.096794   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.097219   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.097158   76107 retry.go:31] will retry after 286.726817ms: waiting for machine to come up
	I0816 18:13:58.386213   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.386782   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.386704   76107 retry.go:31] will retry after 386.697374ms: waiting for machine to come up
	I0816 18:13:58.775483   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.775989   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:58.776014   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:58.775949   76107 retry.go:31] will retry after 554.398617ms: waiting for machine to come up
	I0816 18:13:59.331517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332002   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.332024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.331943   76107 retry.go:31] will retry after 589.24333ms: waiting for machine to come up
	I0816 18:13:58.823309   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 18:13:58.823318   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 18:13:58.823410   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 18:13:58.823434   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.823437   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:13:58.823549   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 18:13:58.836312   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 18:13:58.894363   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 18:13:58.894428   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 18:13:58.894447   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 18:13:58.894495   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:13:58.933183   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 18:13:58.933290   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:13:58.934389   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 18:13:58.934456   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 18:13:58.934491   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 18:13:58.934550   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:13:58.934569   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:13:58.934682   74828 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792156   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.897633034s)
	I0816 18:14:00.792196   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 18:14:00.792224   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (1.89763588s)
	I0816 18:14:00.792257   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 18:14:00.792230   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792281   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.858968807s)
	I0816 18:14:00.792300   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 18:14:00.792317   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 18:14:00.792355   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.85778817s)
	I0816 18:14:00.792370   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 18:14:00.792415   74828 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.857704749s)
	I0816 18:14:00.792422   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.857843473s)
	I0816 18:14:00.792436   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 18:14:00.792457   74828 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 18:14:00.792491   74828 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:00.792528   74828 ssh_runner.go:195] Run: which crictl
	I0816 18:14:00.797103   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:03.171070   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.378727123s)
	I0816 18:14:03.171118   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 18:14:03.171149   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.374004458s)
	I0816 18:14:03.171155   74828 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171274   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 18:14:03.171225   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:13:59.922834   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923439   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:13:59.923467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:13:59.923368   76107 retry.go:31] will retry after 779.656786ms: waiting for machine to come up
	I0816 18:14:00.704929   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705395   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:00.705417   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:00.705344   76107 retry.go:31] will retry after 790.87115ms: waiting for machine to come up
	I0816 18:14:01.497557   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.497999   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:01.498052   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:01.497981   76107 retry.go:31] will retry after 919.825072ms: waiting for machine to come up
	I0816 18:14:02.419821   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420280   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:02.420312   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:02.420227   76107 retry.go:31] will retry after 1.304504009s: waiting for machine to come up
	I0816 18:14:03.725928   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726378   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:03.726400   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:03.726344   76107 retry.go:31] will retry after 2.105251359s: waiting for machine to come up
	I0816 18:14:06.879864   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.708558161s)
	I0816 18:14:06.879904   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 18:14:06.879905   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.708563338s)
	I0816 18:14:06.879935   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:06.879981   74828 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:06.879991   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 18:14:08.769077   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.889063218s)
	I0816 18:14:08.769114   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 18:14:08.769145   74828 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769231   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 18:14:08.769146   74828 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.889146748s)
	I0816 18:14:08.769343   74828 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 18:14:08.769431   74828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:05.833605   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834078   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:05.834109   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:05.834025   76107 retry.go:31] will retry after 2.042421539s: waiting for machine to come up
	I0816 18:14:07.878000   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878510   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:07.878541   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:07.878432   76107 retry.go:31] will retry after 2.777402825s: waiting for machine to come up
	I0816 18:14:10.627286   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.858028746s)
	I0816 18:14:10.627331   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 18:14:10.627346   74828 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.857891086s)
	I0816 18:14:10.627358   74828 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:10.627378   74828 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 18:14:10.627402   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 18:14:11.977277   74828 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.349851948s)
	I0816 18:14:11.977314   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 18:14:11.977339   74828 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:11.977389   74828 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 18:14:12.630939   74828 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 18:14:12.630999   74828 cache_images.go:123] Successfully loaded all cached images
	I0816 18:14:12.631004   74828 cache_images.go:92] duration metric: took 14.589319022s to LoadCachedImages
	I0816 18:14:12.631016   74828 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.31.0 crio true true} ...
	I0816 18:14:12.631132   74828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-864476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:12.631207   74828 ssh_runner.go:195] Run: crio config
	I0816 18:14:12.683072   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:12.683094   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:12.683107   74828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:12.683129   74828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-864476 NodeName:no-preload-864476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:12.683276   74828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-864476"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
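	The block above is the kubeadm/kubelet/kube-proxy configuration that minikube rendered for the no-preload-864476 profile before copying it to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As a hedged aside that is not part of the test output, a rendered config like this can be sanity-checked offline with kubeadm's own validator (available in kubeadm v1.26 and later); the binary and file paths below are the ones that appear in this log:
	
	/var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new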
	I0816 18:14:12.683345   74828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:12.693879   74828 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:12.693941   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:12.702601   74828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 18:14:12.718235   74828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:12.733455   74828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 18:14:12.748878   74828 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:12.752276   74828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:12.763390   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:12.872450   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:12.888531   74828 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476 for IP: 192.168.50.50
	I0816 18:14:12.888569   74828 certs.go:194] generating shared ca certs ...
	I0816 18:14:12.888589   74828 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:12.888783   74828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:12.888845   74828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:12.888860   74828 certs.go:256] generating profile certs ...
	I0816 18:14:12.888971   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/client.key
	I0816 18:14:12.889070   74828 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key.30cf6dcb
	I0816 18:14:12.889136   74828 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key
	I0816 18:14:12.889298   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:12.889339   74828 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:12.889351   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:12.889391   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:12.889421   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:12.889452   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:12.889507   74828 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:12.890441   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:12.919571   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:12.947375   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:12.975197   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:13.007308   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 18:14:13.056151   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:13.080317   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:13.102231   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/no-preload-864476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:13.124045   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:13.145312   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:13.166806   74828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:13.188173   74828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:13.203594   74828 ssh_runner.go:195] Run: openssl version
	I0816 18:14:13.209148   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:13.220266   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224569   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.224635   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:13.230141   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:13.241362   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:13.252437   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256658   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.256712   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:13.262006   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:13.273168   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:13.284518   74828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288566   74828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.288611   74828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:13.293944   74828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
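	The command groups above show the pattern minikube uses to install each CA into the guest trust store: copy the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, compute its OpenSSL subject hash, and finally create the <hash>.0 symlink that OpenSSL looks up at verification time. A minimal reproduction of the naming convention, using the minikubeCA values visible in this log (the hash b5213941 comes from the openssl x509 -hash call above):
	
	# the subject hash decides the trust-store symlink name
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0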
	I0816 18:14:13.305148   74828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:13.309460   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:13.315123   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:13.320854   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:13.326676   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:13.332183   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:13.337794   74828 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:13.343369   74828 kubeadm.go:392] StartCluster: {Name:no-preload-864476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-864476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:13.343470   74828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:13.343527   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.384490   74828 cri.go:89] found id: ""
	I0816 18:14:13.384567   74828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:13.395094   74828 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:13.395116   74828 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:13.395183   74828 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:13.406605   74828 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:13.407898   74828 kubeconfig.go:125] found "no-preload-864476" server: "https://192.168.50.50:8443"
	I0816 18:14:13.410808   74828 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:13.420516   74828 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.50
	I0816 18:14:13.420541   74828 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:13.420554   74828 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:13.420589   74828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:13.459174   74828 cri.go:89] found id: ""
	I0816 18:14:13.459242   74828 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:13.475598   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:13.484685   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:13.484707   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:13.484756   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:13.493092   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:13.493147   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:13.501649   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:13.509987   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:13.510028   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:13.518500   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.526689   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:13.526737   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:13.535606   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:13.545130   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:13.545185   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:13.553947   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:13.562763   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:13.663383   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:10.657652   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | unable to find current IP address of domain default-k8s-diff-port-256678 in network mk-default-k8s-diff-port-256678
	I0816 18:14:10.658105   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | I0816 18:14:10.657999   76107 retry.go:31] will retry after 3.856225979s: waiting for machine to come up
	I0816 18:14:14.518358   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.518875   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Found IP for machine: 192.168.72.144
	I0816 18:14:14.518896   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserving static IP address...
	I0816 18:14:14.518915   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has current primary IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.519296   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Reserved static IP address: 192.168.72.144
	I0816 18:14:14.519334   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.519346   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Waiting for SSH to be available...
	I0816 18:14:14.519377   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | skip adding static IP to network mk-default-k8s-diff-port-256678 - found existing host DHCP lease matching {name: "default-k8s-diff-port-256678", mac: "52:54:00:76:32:d8", ip: "192.168.72.144"}
	I0816 18:14:14.519391   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Getting to WaitForSSH function...
	I0816 18:14:14.521566   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.521926   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.521969   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.522133   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH client type: external
	I0816 18:14:14.522160   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa (-rw-------)
	I0816 18:14:14.522202   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:14.522221   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | About to run SSH command:
	I0816 18:14:14.522235   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | exit 0
	I0816 18:14:14.648603   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:14.649005   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetConfigRaw
	I0816 18:14:14.649616   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:14.652340   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.652767   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.652796   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.653116   75006 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/config.json ...
	I0816 18:14:14.653337   75006 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:14.653361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:14.653598   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.656062   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656412   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.656442   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.656565   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.656757   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.656895   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.657015   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.657128   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.657312   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.657321   75006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:14.768721   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:14.768749   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.768990   75006 buildroot.go:166] provisioning hostname "default-k8s-diff-port-256678"
	I0816 18:14:14.769021   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:14.769246   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.772310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772675   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.772704   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.772922   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.773084   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773242   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.773361   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.773564   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.773764   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.773783   75006 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-256678 && echo "default-k8s-diff-port-256678" | sudo tee /etc/hostname
	I0816 18:14:14.894016   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-256678
	
	I0816 18:14:14.894047   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:14.896797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897150   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:14.897184   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:14.897424   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:14.897613   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:14.897933   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:14.898124   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:14.898286   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:14.898303   75006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-256678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-256678/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-256678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:15.814480   75402 start.go:364] duration metric: took 3m22.605706427s to acquireMachinesLock for "old-k8s-version-783465"
	I0816 18:14:15.814546   75402 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:15.814554   75402 fix.go:54] fixHost starting: 
	I0816 18:14:15.815001   75402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:15.815062   75402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:15.834710   75402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0816 18:14:15.835124   75402 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:15.835653   75402 main.go:141] libmachine: Using API Version  1
	I0816 18:14:15.835676   75402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:15.836005   75402 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:15.836258   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:15.836392   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetState
	I0816 18:14:15.838010   75402 fix.go:112] recreateIfNeeded on old-k8s-version-783465: state=Stopped err=<nil>
	I0816 18:14:15.838043   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	W0816 18:14:15.838200   75402 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:15.840214   75402 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-783465" ...
	I0816 18:14:15.016150   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:15.016176   75006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:15.016200   75006 buildroot.go:174] setting up certificates
	I0816 18:14:15.016213   75006 provision.go:84] configureAuth start
	I0816 18:14:15.016231   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetMachineName
	I0816 18:14:15.016518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.019132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019687   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.019725   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.019907   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.022758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023192   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.023233   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.023408   75006 provision.go:143] copyHostCerts
	I0816 18:14:15.023468   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:15.023489   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:15.023552   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:15.023649   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:15.023659   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:15.023681   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:15.023733   75006 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:15.023740   75006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:15.023756   75006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:15.023802   75006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-256678 san=[127.0.0.1 192.168.72.144 default-k8s-diff-port-256678 localhost minikube]
	I0816 18:14:15.142549   75006 provision.go:177] copyRemoteCerts
	I0816 18:14:15.142601   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:15.142625   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.145515   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.145867   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.145903   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.146029   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.146250   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.146436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.146604   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.230785   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:15.258450   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 18:14:15.286008   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:15.308690   75006 provision.go:87] duration metric: took 292.45797ms to configureAuth
	I0816 18:14:15.308725   75006 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:15.308927   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:15.308996   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.311959   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312310   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.312332   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.312492   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.312713   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.312890   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.313028   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.313184   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.313369   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.313387   75006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:15.574487   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:15.574517   75006 machine.go:96] duration metric: took 921.166622ms to provisionDockerMachine
	I0816 18:14:15.574529   75006 start.go:293] postStartSetup for "default-k8s-diff-port-256678" (driver="kvm2")
	I0816 18:14:15.574538   75006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:15.574552   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.574835   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:15.574854   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.577944   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578266   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.578295   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.578469   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.578651   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.578800   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.578912   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.664404   75006 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:15.668362   75006 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:15.668389   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:15.668459   75006 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:15.668562   75006 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:15.668705   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:15.678830   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:15.702087   75006 start.go:296] duration metric: took 127.545675ms for postStartSetup
	I0816 18:14:15.702129   75006 fix.go:56] duration metric: took 19.172678011s for fixHost
	I0816 18:14:15.702152   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.704680   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705117   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.705154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.705288   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.705479   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705643   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.705766   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.705922   75006 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:15.706084   75006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0816 18:14:15.706095   75006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:15.814313   75006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832055.788948458
	
	I0816 18:14:15.814337   75006 fix.go:216] guest clock: 1723832055.788948458
	I0816 18:14:15.814348   75006 fix.go:229] Guest: 2024-08-16 18:14:15.788948458 +0000 UTC Remote: 2024-08-16 18:14:15.702133997 +0000 UTC m=+265.826862410 (delta=86.814461ms)
	I0816 18:14:15.814372   75006 fix.go:200] guest clock delta is within tolerance: 86.814461ms
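The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and skip resynchronisation because the 86.8 ms delta is within tolerance. A rough Go sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not minikube's configured value.

```go
// Sketch of the guest/host clock-skew check seen in the fix.go lines above.
// The 2s tolerance is assumed for illustration only.
package main

import (
	"fmt"
	"time"
)

func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest timestamp taken from the log: 1723832055.788948458.
	guest := time.Unix(1723832055, 788948458)
	host := guest.Add(-86814461 * time.Nanosecond) // delta = 86.814461ms per the log
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
```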
	I0816 18:14:15.814382   75006 start.go:83] releasing machines lock for "default-k8s-diff-port-256678", held for 19.284958633s
	I0816 18:14:15.814416   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.814723   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:15.817995   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818426   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.818467   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.818620   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819299   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819518   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:14:15.819616   75006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:15.819656   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.819840   75006 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:15.819869   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:14:15.822797   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823189   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823521   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823659   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.823804   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:15.823811   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.823828   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:15.823965   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824064   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:14:15.824177   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.824234   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:14:15.824368   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:14:15.824486   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:14:15.948709   75006 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:15.956239   75006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:16.103538   75006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:16.109299   75006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:16.109385   75006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:16.125056   75006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:16.125092   75006 start.go:495] detecting cgroup driver to use...
	I0816 18:14:16.125188   75006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:16.141741   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:16.158917   75006 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:16.158993   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:16.173256   75006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:16.187026   75006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:16.332452   75006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:16.503181   75006 docker.go:233] disabling docker service ...
	I0816 18:14:16.503254   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:16.517961   75006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:16.535991   75006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:16.667874   75006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:16.799300   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:16.813852   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:16.832891   75006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:16.832953   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.845621   75006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:16.845716   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.856045   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.866117   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.877586   75006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:16.887643   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.897164   75006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.915247   75006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:16.924887   75006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:16.933645   75006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:16.933709   75006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:16.946920   75006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:16.955928   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:17.090148   75006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:17.241434   75006 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:17.241531   75006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:17.246730   75006 start.go:563] Will wait 60s for crictl version
	I0816 18:14:17.246796   75006 ssh_runner.go:195] Run: which crictl
	I0816 18:14:17.250397   75006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:17.289194   75006 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:17.289295   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.324401   75006 ssh_runner.go:195] Run: crio --version
	I0816 18:14:17.361220   75006 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 18:14:15.841411   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .Start
	I0816 18:14:15.841576   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring networks are active...
	I0816 18:14:15.842263   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network default is active
	I0816 18:14:15.842609   75402 main.go:141] libmachine: (old-k8s-version-783465) Ensuring network mk-old-k8s-version-783465 is active
	I0816 18:14:15.843023   75402 main.go:141] libmachine: (old-k8s-version-783465) Getting domain xml...
	I0816 18:14:15.844141   75402 main.go:141] libmachine: (old-k8s-version-783465) Creating domain...
	I0816 18:14:17.215163   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting to get IP...
	I0816 18:14:17.216445   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.216933   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.217029   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.216922   76298 retry.go:31] will retry after 286.243503ms: waiting for machine to come up
	I0816 18:14:17.504645   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.505240   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.505262   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.505175   76298 retry.go:31] will retry after 275.715235ms: waiting for machine to come up
	I0816 18:14:17.782804   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:17.783365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:17.783392   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:17.783292   76298 retry.go:31] will retry after 343.088129ms: waiting for machine to come up
	I0816 18:14:14.936549   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.273126441s)
	I0816 18:14:14.936584   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.139778   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.201814   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:15.270552   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:15.270667   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:15.771379   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.271296   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:16.335242   74828 api_server.go:72] duration metric: took 1.064710561s to wait for apiserver process to appear ...
	I0816 18:14:16.335265   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:16.335282   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:16.335727   74828 api_server.go:269] stopped: https://192.168.50.50:8443/healthz: Get "https://192.168.50.50:8443/healthz": dial tcp 192.168.50.50:8443: connect: connection refused
	I0816 18:14:16.835361   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:17.362436   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetIP
	I0816 18:14:17.365728   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366122   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:14:17.366154   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:14:17.366403   75006 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:17.370322   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:17.383153   75006 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:17.383303   75006 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:17.383364   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:17.420269   75006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:17.420339   75006 ssh_runner.go:195] Run: which lz4
	I0816 18:14:17.424477   75006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:17.428507   75006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:17.428547   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:18.717202   75006 crio.go:462] duration metric: took 1.292754157s to copy over tarball
	I0816 18:14:18.717278   75006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
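Because the crio.go:510 check above found no preloaded images, the ~389 MB preload tarball is copied to the guest and unpacked into /var with lz4 decompression, preserving security xattrs. As a sketch, the same tar invocation wrapped in Go (flags and paths match the log; running it requires the tarball and lz4 to be present):

```go
// Sketch: run the same preload extraction as the ssh_runner line above,
// streaming tar output for debugging. Paths and flags mirror the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```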
	I0816 18:14:19.241691   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.241729   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.241746   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.292883   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:19.292924   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:19.336097   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.363715   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.363753   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:19.835848   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:19.840615   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:19.840666   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.336291   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.343751   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:20.343785   74828 api_server.go:103] status: https://192.168.50.50:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:20.835470   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:14:20.841217   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:14:20.849609   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:20.849642   74828 api_server.go:131] duration metric: took 4.514370955s to wait for apiserver health ...
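The api_server.go lines above poll https://192.168.50.50:8443/healthz roughly every 500 ms, tolerating the intermediate "connection refused", 403 and 500 responses while the apiserver's post-start hooks finish, and stop once a 200 "ok" comes back. A hedged Go sketch of that polling loop (TLS verification disabled as appropriate only for a throwaway test cluster; URL and interval taken from the log):

```go
// Sketch of the healthz polling loop driven by api_server.go above: keep
// probing until the endpoint returns HTTP 200, ignoring earlier 403/500
// responses emitted while post-start hooks are still completing.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			// The test cluster's serving cert is not in the system trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval, as in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.50:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```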
	I0816 18:14:20.849653   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:14:20.849662   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:20.851828   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:18.127538   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.128044   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.128077   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.127958   76298 retry.go:31] will retry after 543.91951ms: waiting for machine to come up
	I0816 18:14:18.673778   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:18.674328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:18.674351   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:18.674274   76298 retry.go:31] will retry after 694.978788ms: waiting for machine to come up
	I0816 18:14:19.370976   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.371577   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.371605   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.371538   76298 retry.go:31] will retry after 578.640883ms: waiting for machine to come up
	I0816 18:14:19.952328   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:19.952917   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:19.952941   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:19.952863   76298 retry.go:31] will retry after 820.19233ms: waiting for machine to come up
	I0816 18:14:20.774767   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:20.775175   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:20.775200   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:20.775134   76298 retry.go:31] will retry after 1.262201815s: waiting for machine to come up
	I0816 18:14:22.038872   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:22.039357   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:22.039385   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:22.039302   76298 retry.go:31] will retry after 1.164593889s: waiting for machine to come up
	I0816 18:14:20.853121   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:20.866117   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:20.888451   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:20.902482   74828 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:20.902530   74828 system_pods.go:61] "coredns-6f6b679f8f-w9cbm" [9b50c913-f492-4432-a50a-e0f727a7b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:20.902545   74828 system_pods.go:61] "etcd-no-preload-864476" [e45a11b8-fa3e-4a6e-9d06-5d82fdaf20dc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:20.902557   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [1cf82575-b520-4bc0-9e90-d40c02b4468d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:20.902568   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [8c9123e0-16a4-4940-8464-4bec383bac90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:20.902577   74828 system_pods.go:61] "kube-proxy-vdqxz" [0332e87e-5c0c-41f5-88a9-31b7f8494eb6] Running
	I0816 18:14:20.902587   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [6139753f-b5cf-4af5-a9fa-03fb220e3dc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:20.902606   74828 system_pods.go:61] "metrics-server-6867b74b74-rxtwg" [f0d04fc9-24c0-47e3-afdc-f250ef07900c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:20.902620   74828 system_pods.go:61] "storage-provisioner" [65303dd8-27d7-4bf3-ae58-ff5fe556f17f] Running
	I0816 18:14:20.902631   74828 system_pods.go:74] duration metric: took 14.150825ms to wait for pod list to return data ...
	I0816 18:14:20.902645   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:20.909305   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:20.909342   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:20.909355   74828 node_conditions.go:105] duration metric: took 6.699359ms to run NodePressure ...
	I0816 18:14:20.909377   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:21.193348   74828 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198555   74828 kubeadm.go:739] kubelet initialised
	I0816 18:14:21.198585   74828 kubeadm.go:740] duration metric: took 5.20722ms waiting for restarted kubelet to initialise ...
	I0816 18:14:21.198595   74828 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:21.204695   74828 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.212855   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212877   74828 pod_ready.go:82] duration metric: took 8.157781ms for pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.212889   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "coredns-6f6b679f8f-w9cbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.212899   74828 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.220125   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220150   74828 pod_ready.go:82] duration metric: took 7.241861ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.220158   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "etcd-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.220166   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.226930   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226957   74828 pod_ready.go:82] duration metric: took 6.783402ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.226967   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-apiserver-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.226976   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.292011   74828 pod_ready.go:98] node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292054   74828 pod_ready.go:82] duration metric: took 65.066708ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	E0816 18:14:21.292066   74828 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-864476" hosting pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-864476" has status "Ready":"False"
	I0816 18:14:21.292075   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692536   74828 pod_ready.go:93] pod "kube-proxy-vdqxz" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:21.692564   74828 pod_ready.go:82] duration metric: took 400.476293ms for pod "kube-proxy-vdqxz" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:21.692577   74828 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
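The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, skipping pods whose hosting node is itself not yet "Ready". A minimal client-go sketch of the per-pod check is shown below; the clientset construction is omitted and the helper name is mine, not minikube's.

```go
// Sketch of the "wait for pod Ready" check driven by pod_ready.go above.
// Assumes an already-built *kubernetes.Clientset; names are illustrative.
package waitutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func WaitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}
```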
	I0816 18:14:21.155261   75006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.437939279s)
	I0816 18:14:21.155296   75006 crio.go:469] duration metric: took 2.438065212s to extract the tarball
	I0816 18:14:21.155325   75006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:21.199451   75006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:21.249963   75006 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:14:21.249990   75006 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:14:21.250002   75006 kubeadm.go:934] updating node { 192.168.72.144 8444 v1.31.0 crio true true} ...
	I0816 18:14:21.250129   75006 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-256678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:21.250211   75006 ssh_runner.go:195] Run: crio config
	I0816 18:14:21.299619   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:21.299644   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:21.299663   75006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:21.299684   75006 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-256678 NodeName:default-k8s-diff-port-256678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:14:21.299813   75006 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-256678"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:21.299880   75006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:14:21.310127   75006 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:21.310205   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:21.319566   75006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 18:14:21.337043   75006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:21.352319   75006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 18:14:21.370117   75006 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:21.373986   75006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:21.386518   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:21.508855   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:21.525184   75006 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678 for IP: 192.168.72.144
	I0816 18:14:21.525209   75006 certs.go:194] generating shared ca certs ...
	I0816 18:14:21.525230   75006 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:21.525413   75006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:21.525468   75006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:21.525481   75006 certs.go:256] generating profile certs ...
	I0816 18:14:21.525604   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/client.key
	I0816 18:14:21.525688   75006 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key.ac6d83aa
	I0816 18:14:21.525738   75006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key
	I0816 18:14:21.525888   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:21.525931   75006 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:21.525944   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:21.525991   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:21.526028   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:21.526052   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:21.526101   75006 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:21.526719   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:21.556992   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:21.590311   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:21.624782   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:21.655118   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 18:14:21.695431   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:14:21.722575   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:21.744870   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/default-k8s-diff-port-256678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:14:21.770850   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:21.793906   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:21.817643   75006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:21.839584   75006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:21.856447   75006 ssh_runner.go:195] Run: openssl version
	I0816 18:14:21.862104   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:21.872584   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876886   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.876945   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:21.882424   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:21.892761   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:21.904506   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909624   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.909687   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:21.915765   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:21.927160   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:21.937381   75006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941423   75006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.941477   75006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:21.946741   75006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:21.958082   75006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:21.962431   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:21.969889   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:21.977302   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:21.983468   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:21.989115   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:21.994569   75006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:21.999962   75006 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-256678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-256678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:22.000090   75006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:22.000139   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.034063   75006 cri.go:89] found id: ""
	I0816 18:14:22.034158   75006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:22.043988   75006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:22.044003   75006 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:22.044040   75006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:22.053276   75006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:22.054255   75006 kubeconfig.go:125] found "default-k8s-diff-port-256678" server: "https://192.168.72.144:8444"
	I0816 18:14:22.056408   75006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:22.065394   75006 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I0816 18:14:22.065429   75006 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:22.065443   75006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:22.065496   75006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:22.112797   75006 cri.go:89] found id: ""
	I0816 18:14:22.112889   75006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:22.130231   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:22.139432   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:22.139451   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:22.139493   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:14:22.148118   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:22.148168   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:22.158088   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:14:22.166741   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:22.166803   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:22.175578   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.185238   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:22.185286   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:22.194074   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:14:22.205053   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:22.205105   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:22.216506   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:22.228754   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:22.344597   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.006750   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.275587   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.356515   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:23.432890   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:23.432991   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.933834   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.433736   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:23.205567   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:23.206051   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:23.206078   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:23.206007   76298 retry.go:31] will retry after 2.304886921s: waiting for machine to come up
	I0816 18:14:25.512748   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:25.513295   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:25.513321   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:25.513261   76298 retry.go:31] will retry after 2.603393394s: waiting for machine to come up
	I0816 18:14:23.801346   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:26.199045   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:28.205981   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:24.933846   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:24.954190   75006 api_server.go:72] duration metric: took 1.521307594s to wait for apiserver process to appear ...
	I0816 18:14:24.954219   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:14:24.954242   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.835517   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.835552   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.835567   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.842961   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0816 18:14:27.842992   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0816 18:14:27.954290   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:27.963372   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:27.963400   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.455035   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.460244   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.460279   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:28.954475   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:28.962766   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:28.962802   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.454298   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.458650   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.458681   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:29.954582   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:29.959359   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:29.959384   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.455077   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.461068   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.461099   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:30.954772   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:30.960557   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:14:30.960588   75006 api_server.go:103] status: https://192.168.72.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:14:31.455232   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:14:31.460157   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:14:31.471015   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:14:31.471046   75006 api_server.go:131] duration metric: took 6.516819341s to wait for apiserver health ...
	I0816 18:14:31.471056   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:14:31.471064   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:31.472930   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:14:28.118105   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:28.118675   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:28.118706   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:28.118637   76298 retry.go:31] will retry after 2.400714985s: waiting for machine to come up
	I0816 18:14:30.521623   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:30.522157   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | unable to find current IP address of domain old-k8s-version-783465 in network mk-old-k8s-version-783465
	I0816 18:14:30.522196   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | I0816 18:14:30.522111   76298 retry.go:31] will retry after 3.210603239s: waiting for machine to come up
	I0816 18:14:30.699930   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:33.200755   74828 pod_ready.go:103] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:31.474388   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:14:31.484723   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:14:31.502094   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:14:31.511169   75006 system_pods.go:59] 8 kube-system pods found
	I0816 18:14:31.511207   75006 system_pods.go:61] "coredns-6f6b679f8f-2sgmk" [3c98207c-ab70-435e-a725-3d6b108515d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:14:31.511215   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [c6d0dbe2-8b80-4fb2-8408-7b2e668cf4cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:14:31.511221   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [4506e38e-6685-41f8-98b1-738b35476ad7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:14:31.511228   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [14282ea5-2ebc-4ea6-8e06-829e86296333] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:14:31.511232   75006 system_pods.go:61] "kube-proxy-l4lr2" [880ceec6-c3d1-4934-b02a-7a175ded8a02] Running
	I0816 18:14:31.511236   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [b122d1cd-12e8-4b87-a179-c50baf4c89d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:14:31.511241   75006 system_pods.go:61] "metrics-server-6867b74b74-fc4h4" [3cb9624e-98b4-4edb-a2de-d6a971520cac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:14:31.511244   75006 system_pods.go:61] "storage-provisioner" [79442d12-c28b-447e-ae96-e4c2ddb5c4da] Running
	I0816 18:14:31.511250   75006 system_pods.go:74] duration metric: took 9.137933ms to wait for pod list to return data ...
	I0816 18:14:31.511256   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:14:31.515339   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:14:31.515361   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:14:31.515370   75006 node_conditions.go:105] duration metric: took 4.110442ms to run NodePressure ...
	I0816 18:14:31.515387   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:31.774197   75006 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778258   75006 kubeadm.go:739] kubelet initialised
	I0816 18:14:31.778276   75006 kubeadm.go:740] duration metric: took 4.052927ms waiting for restarted kubelet to initialise ...
	I0816 18:14:31.778283   75006 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:14:31.782595   75006 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:33.788205   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.053312   74510 start.go:364] duration metric: took 53.786665535s to acquireMachinesLock for "embed-certs-777541"
	I0816 18:14:35.053367   74510 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:14:35.053372   74510 fix.go:54] fixHost starting: 
	I0816 18:14:35.053687   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:14:35.053718   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:14:35.073509   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0816 18:14:35.073935   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:14:35.074396   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:14:35.074420   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:14:35.074749   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:14:35.074928   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:35.075102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:14:35.076710   74510 fix.go:112] recreateIfNeeded on embed-certs-777541: state=Stopped err=<nil>
	I0816 18:14:35.076738   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	W0816 18:14:35.076903   74510 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:14:35.078759   74510 out.go:177] * Restarting existing kvm2 VM for "embed-certs-777541" ...
	I0816 18:14:33.735394   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735898   75402 main.go:141] libmachine: (old-k8s-version-783465) Found IP for machine: 192.168.39.211
	I0816 18:14:33.735925   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has current primary IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.735933   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserving static IP address...
	I0816 18:14:33.736407   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.736439   75402 main.go:141] libmachine: (old-k8s-version-783465) Reserved static IP address: 192.168.39.211
	I0816 18:14:33.736459   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | skip adding static IP to network mk-old-k8s-version-783465 - found existing host DHCP lease matching {name: "old-k8s-version-783465", mac: "52:54:00:d1:97:35", ip: "192.168.39.211"}
	I0816 18:14:33.736478   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Getting to WaitForSSH function...
	I0816 18:14:33.736492   75402 main.go:141] libmachine: (old-k8s-version-783465) Waiting for SSH to be available...
	I0816 18:14:33.739028   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739377   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.739397   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.739596   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH client type: external
	I0816 18:14:33.739689   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa (-rw-------)
	I0816 18:14:33.739724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:33.739747   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | About to run SSH command:
	I0816 18:14:33.739785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | exit 0
	I0816 18:14:33.861036   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:33.861405   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetConfigRaw
	I0816 18:14:33.862105   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:33.864850   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865245   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.865272   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.865542   75402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/config.json ...
	I0816 18:14:33.865796   75402 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:33.865820   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:33.866053   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.868422   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868761   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.868795   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.868911   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.869095   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869267   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.869415   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.869579   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.869796   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.869810   75402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:33.972880   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:33.972907   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973141   75402 buildroot.go:166] provisioning hostname "old-k8s-version-783465"
	I0816 18:14:33.973172   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:33.973378   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:33.976198   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976530   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:33.976563   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:33.976747   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:33.976945   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977086   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:33.977228   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:33.977369   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:33.977529   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:33.977540   75402 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-783465 && echo "old-k8s-version-783465" | sudo tee /etc/hostname
	I0816 18:14:34.086092   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-783465
	
	I0816 18:14:34.086123   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.088785   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089107   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.089132   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.089285   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.089527   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089684   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.089828   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.089997   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.090152   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.090168   75402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-783465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-783465/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-783465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:34.200744   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:34.200779   75402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:34.200834   75402 buildroot.go:174] setting up certificates
	I0816 18:14:34.200848   75402 provision.go:84] configureAuth start
	I0816 18:14:34.200862   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetMachineName
	I0816 18:14:34.201175   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:34.203868   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204297   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.204344   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.204506   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.207067   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207441   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.207464   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.207810   75402 provision.go:143] copyHostCerts
	I0816 18:14:34.207869   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:34.207892   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:34.207951   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:34.208058   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:34.208069   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:34.208103   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:34.208180   75402 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:34.208192   75402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:34.208220   75402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:34.208291   75402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-783465 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-783465]
	I0816 18:14:34.413800   75402 provision.go:177] copyRemoteCerts
	I0816 18:14:34.413857   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:34.413881   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.416724   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417138   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.417173   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.417345   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.417673   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.417894   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.418089   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.495519   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:34.517414   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:14:34.540423   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 18:14:34.563983   75402 provision.go:87] duration metric: took 363.122639ms to configureAuth
	I0816 18:14:34.564019   75402 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:34.564229   75402 config.go:182] Loaded profile config "old-k8s-version-783465": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 18:14:34.564299   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.567149   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567550   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.567580   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.567753   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.567935   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568098   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.568255   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.568448   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.568659   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.568680   75402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:34.824064   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:34.824091   75402 machine.go:96] duration metric: took 958.278616ms to provisionDockerMachine
	I0816 18:14:34.824106   75402 start.go:293] postStartSetup for "old-k8s-version-783465" (driver="kvm2")
	I0816 18:14:34.824120   75402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:34.824169   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:34.824556   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:34.824599   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.827203   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827517   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.827547   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.827677   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.827869   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.828033   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.828171   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:34.912148   75402 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:34.916652   75402 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:34.916681   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:34.916755   75402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:34.916864   75402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:34.916989   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:34.927061   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:34.949703   75402 start.go:296] duration metric: took 125.581331ms for postStartSetup
	I0816 18:14:34.949743   75402 fix.go:56] duration metric: took 19.13519024s for fixHost
	I0816 18:14:34.949763   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:34.952740   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953090   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:34.953124   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:34.953307   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:34.953532   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953715   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:34.953861   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:34.954029   75402 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:34.954229   75402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0816 18:14:34.954242   75402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:35.053143   75402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832075.025252523
	
	I0816 18:14:35.053171   75402 fix.go:216] guest clock: 1723832075.025252523
	I0816 18:14:35.053180   75402 fix.go:229] Guest: 2024-08-16 18:14:35.025252523 +0000 UTC Remote: 2024-08-16 18:14:34.949747236 +0000 UTC m=+221.880938919 (delta=75.505287ms)
	I0816 18:14:35.053204   75402 fix.go:200] guest clock delta is within tolerance: 75.505287ms
	I0816 18:14:35.053211   75402 start.go:83] releasing machines lock for "old-k8s-version-783465", held for 19.238692888s
	I0816 18:14:35.053243   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.053549   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:35.056365   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.056792   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.056823   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.057009   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057509   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057731   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .DriverName
	I0816 18:14:35.057831   75402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:35.057892   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.057951   75402 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:35.057972   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHHostname
	I0816 18:14:35.060543   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060733   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.060987   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061016   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061126   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:35.061148   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:35.061154   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061319   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHPort
	I0816 18:14:35.061339   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061456   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHKeyPath
	I0816 18:14:35.061518   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061639   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetSSHUsername
	I0816 18:14:35.061720   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.061773   75402 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/old-k8s-version-783465/id_rsa Username:docker}
	I0816 18:14:35.174137   75402 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:35.181704   75402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:35.323490   75402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:35.330733   75402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:35.330807   75402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:35.350653   75402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:35.350679   75402 start.go:495] detecting cgroup driver to use...
	I0816 18:14:35.350763   75402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:35.372307   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:35.386513   75402 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:35.386598   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:35.400406   75402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:35.414761   75402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:35.540356   75402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:35.675726   75402 docker.go:233] disabling docker service ...
	I0816 18:14:35.675793   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:35.691169   75402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:35.707288   75402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:35.858149   75402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:35.981654   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:35.996396   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:36.013656   75402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 18:14:36.013711   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.023839   75402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:36.023907   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.033889   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.043727   75402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:36.053496   75402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:36.063694   75402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:36.072919   75402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:36.072979   75402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:36.085707   75402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:36.095377   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:36.219235   75402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:36.384915   75402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:36.384990   75402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:36.392122   75402 start.go:563] Will wait 60s for crictl version
	I0816 18:14:36.392196   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:36.397589   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:36.443581   75402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:36.443710   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.473740   75402 ssh_runner.go:195] Run: crio --version
	I0816 18:14:36.512542   75402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 18:14:36.513678   75402 main.go:141] libmachine: (old-k8s-version-783465) Calling .GetIP
	I0816 18:14:36.517404   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.517912   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:97:35", ip: ""} in network mk-old-k8s-version-783465: {Iface:virbr1 ExpiryTime:2024-08-16 19:14:27 +0000 UTC Type:0 Mac:52:54:00:d1:97:35 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-783465 Clientid:01:52:54:00:d1:97:35}
	I0816 18:14:36.517948   75402 main.go:141] libmachine: (old-k8s-version-783465) DBG | domain old-k8s-version-783465 has defined IP address 192.168.39.211 and MAC address 52:54:00:d1:97:35 in network mk-old-k8s-version-783465
	I0816 18:14:36.518190   75402 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:36.523577   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:36.536188   75402 kubeadm.go:883] updating cluster {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:36.536361   75402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 18:14:36.536425   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:36.587027   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:36.587085   75402 ssh_runner.go:195] Run: which lz4
	I0816 18:14:36.590780   75402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:36.594635   75402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:36.594673   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 18:14:35.080033   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Start
	I0816 18:14:35.080220   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring networks are active...
	I0816 18:14:35.080971   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network default is active
	I0816 18:14:35.081366   74510 main.go:141] libmachine: (embed-certs-777541) Ensuring network mk-embed-certs-777541 is active
	I0816 18:14:35.081887   74510 main.go:141] libmachine: (embed-certs-777541) Getting domain xml...
	I0816 18:14:35.082634   74510 main.go:141] libmachine: (embed-certs-777541) Creating domain...
	I0816 18:14:36.459300   74510 main.go:141] libmachine: (embed-certs-777541) Waiting to get IP...
	I0816 18:14:36.460282   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.460801   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.460883   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.460778   76422 retry.go:31] will retry after 291.491491ms: waiting for machine to come up
	I0816 18:14:36.754548   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:36.755372   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:36.755412   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:36.755313   76422 retry.go:31] will retry after 356.347467ms: waiting for machine to come up
	I0816 18:14:37.113124   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.113704   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.113739   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.113676   76422 retry.go:31] will retry after 386.244375ms: waiting for machine to come up
	I0816 18:14:37.502241   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.502796   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.502826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.502750   76422 retry.go:31] will retry after 437.69847ms: waiting for machine to come up
	I0816 18:14:37.942667   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:37.943423   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:37.943456   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:37.943378   76422 retry.go:31] will retry after 709.064032ms: waiting for machine to come up
	I0816 18:14:38.653840   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:38.654349   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:38.654386   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:38.654297   76422 retry.go:31] will retry after 594.417028ms: waiting for machine to come up
	I0816 18:14:34.700134   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:34.700158   74828 pod_ready.go:82] duration metric: took 13.007571631s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:34.700171   74828 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:36.707977   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:38.708527   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:35.790842   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:37.791236   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:37.791278   75006 pod_ready.go:82] duration metric: took 6.008656328s for pod "coredns-6f6b679f8f-2sgmk" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:37.791294   75006 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798513   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:39.798543   75006 pod_ready.go:82] duration metric: took 2.007240233s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:39.798557   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:38.127403   75402 crio.go:462] duration metric: took 1.536659915s to copy over tarball
	I0816 18:14:38.127504   75402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:41.109575   75402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.982013621s)
	I0816 18:14:41.109639   75402 crio.go:469] duration metric: took 2.982198625s to extract the tarball
	I0816 18:14:41.109650   75402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:14:41.152940   75402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:41.185863   75402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 18:14:41.185892   75402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 18:14:41.185982   75402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.186003   75402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.186036   75402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.186044   75402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.186103   75402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 18:14:41.186171   75402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.185993   75402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187521   75402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 18:14:41.187532   75402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.187542   75402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.187527   75402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.187595   75402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:41.187605   75402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.187688   75402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.187840   75402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.421551   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 18:14:41.462506   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.467716   75402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 18:14:41.467758   75402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 18:14:41.467810   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508571   75402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 18:14:41.508638   75402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.508687   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.508691   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.514560   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.520003   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.526475   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.526892   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.533271   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.569269   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.569426   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.694043   75402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 18:14:41.694100   75402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.694049   75402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 18:14:41.694210   75402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.694173   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.694268   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.701292   75402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 18:14:41.701337   75402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.701389   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.707345   75402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 18:14:41.707415   75402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.707467   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.711820   75402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 18:14:41.711854   75402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.711896   75402 ssh_runner.go:195] Run: which crictl
	I0816 18:14:41.723813   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.723850   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.723814   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 18:14:41.723939   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.723951   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.724003   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.724060   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.872645   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 18:14:41.872674   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 18:14:41.873747   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:41.873786   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:41.873891   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:41.873899   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:41.873960   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:41.997519   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 18:14:42.002048   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 18:14:42.002091   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 18:14:42.002140   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 18:14:42.002178   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 18:14:42.002218   75402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 18:14:42.070993   75402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:14:42.115418   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 18:14:42.115527   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 18:14:42.115623   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 18:14:42.115631   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 18:14:42.115689   75402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 18:14:42.235706   75402 cache_images.go:92] duration metric: took 1.049784726s to LoadCachedImages
	W0816 18:14:42.235807   75402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19461-9545/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0816 18:14:42.235821   75402 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0816 18:14:42.235939   75402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-783465 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:14:42.236024   75402 ssh_runner.go:195] Run: crio config
	I0816 18:14:42.286742   75402 cni.go:84] Creating CNI manager for ""
	I0816 18:14:42.286763   75402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:14:42.286771   75402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:14:42.286789   75402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-783465 NodeName:old-k8s-version-783465 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:14:42.286904   75402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-783465"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:14:42.286961   75402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:14:42.297015   75402 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:14:42.297098   75402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:14:42.306400   75402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 18:14:42.322812   75402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:14:42.339791   75402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 18:14:42.356930   75402 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0816 18:14:42.360578   75402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:42.373248   75402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:42.495499   75402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:14:42.511910   75402 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465 for IP: 192.168.39.211
	I0816 18:14:42.511942   75402 certs.go:194] generating shared ca certs ...
	I0816 18:14:42.511964   75402 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:42.512147   75402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:14:42.512206   75402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:14:42.512220   75402 certs.go:256] generating profile certs ...
	I0816 18:14:42.512361   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/client.key
	I0816 18:14:42.512431   75402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key.94c45fb6
	I0816 18:14:42.512483   75402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key
	I0816 18:14:42.512664   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:14:42.512709   75402 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:14:42.512724   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:14:42.512754   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:14:42.512794   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:14:42.512825   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:14:42.512881   75402 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:42.513660   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:14:42.552291   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:14:42.585617   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:14:42.611017   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:14:42.638092   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:14:42.676877   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:14:42.710091   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:14:42.743734   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/old-k8s-version-783465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 18:14:42.779905   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:14:42.802779   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:14:42.826432   75402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:14:42.849286   75402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:14:42.866901   75402 ssh_runner.go:195] Run: openssl version
	I0816 18:14:42.872283   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:14:42.882976   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887432   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.887504   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:14:42.893275   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:14:42.903687   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:14:42.915232   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919669   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.919735   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:14:42.925282   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:14:42.937888   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:14:42.949994   75402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954495   75402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.954548   75402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:14:42.960295   75402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:14:42.972006   75402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:14:42.976450   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:14:42.982741   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:14:42.988649   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:14:42.995021   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:14:43.000965   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:14:43.007030   75402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 18:14:43.012891   75402 kubeadm.go:392] StartCluster: {Name:old-k8s-version-783465 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-783465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:14:43.012983   75402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:14:43.013071   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.050670   75402 cri.go:89] found id: ""
	I0816 18:14:43.050741   75402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:14:43.060748   75402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:14:43.060773   75402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:14:43.060825   75402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:14:43.070299   75402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:14:43.071251   75402 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-783465" does not appear in /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:14:43.071945   75402 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-9545/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-783465" cluster setting kubeconfig missing "old-k8s-version-783465" context setting]
	I0816 18:14:43.072870   75402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:14:39.250064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:39.250979   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:39.251028   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:39.250914   76422 retry.go:31] will retry after 1.014851653s: waiting for machine to come up
	I0816 18:14:40.266811   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:40.267287   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:40.267323   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:40.267238   76422 retry.go:31] will retry after 1.333311972s: waiting for machine to come up
	I0816 18:14:41.602031   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:41.602532   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:41.602565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:41.602480   76422 retry.go:31] will retry after 1.525496469s: waiting for machine to come up
	I0816 18:14:43.130136   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:43.130620   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:43.130661   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:43.130563   76422 retry.go:31] will retry after 2.206344656s: waiting for machine to come up
	I0816 18:14:41.206879   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.706278   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:41.806382   75006 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:43.927145   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.927173   75006 pod_ready.go:82] duration metric: took 4.128607781s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.927182   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932293   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.932314   75006 pod_ready.go:82] duration metric: took 5.122737ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.932326   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937128   75006 pod_ready.go:93] pod "kube-proxy-l4lr2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.937146   75006 pod_ready.go:82] duration metric: took 4.812798ms for pod "kube-proxy-l4lr2" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.937154   75006 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.941992   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:14:43.942018   75006 pod_ready.go:82] duration metric: took 4.856588ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.942030   75006 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	I0816 18:14:43.141753   75402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:14:43.154269   75402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0816 18:14:43.154324   75402 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:14:43.154341   75402 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:14:43.154404   75402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:14:43.192966   75402 cri.go:89] found id: ""
	I0816 18:14:43.193035   75402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:14:43.213101   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:14:43.222811   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:14:43.222826   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:14:43.222870   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:14:43.232196   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:14:43.232261   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:14:43.241633   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:14:43.250751   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:14:43.250800   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:14:43.260197   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.268943   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:14:43.269000   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:14:43.277887   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:14:43.286281   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:14:43.286391   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:14:43.295899   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:14:43.306026   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:43.441487   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.213457   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.431649   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.553955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:14:44.646817   75402 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:14:44.646923   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.147202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.648050   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.147958   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:46.647398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:47.646992   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:45.338228   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:45.338729   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:45.338763   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:45.338660   76422 retry.go:31] will retry after 2.526891535s: waiting for machine to come up
	I0816 18:14:47.868326   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:47.868821   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:47.868853   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:47.868774   76422 retry.go:31] will retry after 2.866643935s: waiting for machine to come up
	I0816 18:14:45.706669   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:47.707062   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:45.948791   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.447930   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:48.147987   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:48.646974   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.147114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:49.647020   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.147765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.647135   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.147506   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:51.647568   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:52.647865   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:50.736760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:50.737295   74510 main.go:141] libmachine: (embed-certs-777541) DBG | unable to find current IP address of domain embed-certs-777541 in network mk-embed-certs-777541
	I0816 18:14:50.737331   74510 main.go:141] libmachine: (embed-certs-777541) DBG | I0816 18:14:50.737245   76422 retry.go:31] will retry after 3.824271015s: waiting for machine to come up
	I0816 18:14:50.206249   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.206435   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:50.449586   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:52.948577   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:54.566285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.566784   74510 main.go:141] libmachine: (embed-certs-777541) Found IP for machine: 192.168.61.218
	I0816 18:14:54.566809   74510 main.go:141] libmachine: (embed-certs-777541) Reserving static IP address...
	I0816 18:14:54.566825   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has current primary IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.567171   74510 main.go:141] libmachine: (embed-certs-777541) Reserved static IP address: 192.168.61.218
	I0816 18:14:54.567193   74510 main.go:141] libmachine: (embed-certs-777541) Waiting for SSH to be available...
	I0816 18:14:54.567211   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.567231   74510 main.go:141] libmachine: (embed-certs-777541) DBG | skip adding static IP to network mk-embed-certs-777541 - found existing host DHCP lease matching {name: "embed-certs-777541", mac: "52:54:00:54:9a:0c", ip: "192.168.61.218"}
	I0816 18:14:54.567245   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Getting to WaitForSSH function...
	I0816 18:14:54.569546   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.569864   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.569890   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.570019   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH client type: external
	I0816 18:14:54.570046   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa (-rw-------)
	I0816 18:14:54.570073   74510 main.go:141] libmachine: (embed-certs-777541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 18:14:54.570082   74510 main.go:141] libmachine: (embed-certs-777541) DBG | About to run SSH command:
	I0816 18:14:54.570109   74510 main.go:141] libmachine: (embed-certs-777541) DBG | exit 0
	I0816 18:14:54.692450   74510 main.go:141] libmachine: (embed-certs-777541) DBG | SSH cmd err, output: <nil>: 
	I0816 18:14:54.692828   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetConfigRaw
	I0816 18:14:54.693486   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:54.696565   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.696943   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.696987   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.697248   74510 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/config.json ...
	I0816 18:14:54.697455   74510 machine.go:93] provisionDockerMachine start ...
	I0816 18:14:54.697475   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:54.697686   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.700172   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700491   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.700520   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.700716   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.700906   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701102   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.701239   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.701440   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.701650   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.701662   74510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:14:54.800770   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 18:14:54.800805   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801047   74510 buildroot.go:166] provisioning hostname "embed-certs-777541"
	I0816 18:14:54.801079   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:54.801264   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.804313   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804734   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.804761   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.804940   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.805132   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.805485   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.805711   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.805869   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.805886   74510 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-777541 && echo "embed-certs-777541" | sudo tee /etc/hostname
	I0816 18:14:54.918908   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-777541
	
	I0816 18:14:54.918949   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:54.921760   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922117   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:54.922146   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:54.922338   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:54.922511   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922681   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:54.922843   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:54.923033   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:54.923243   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:54.923261   74510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-777541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-777541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-777541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:14:55.028983   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:14:55.029016   74510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19461-9545/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-9545/.minikube}
	I0816 18:14:55.029040   74510 buildroot.go:174] setting up certificates
	I0816 18:14:55.029051   74510 provision.go:84] configureAuth start
	I0816 18:14:55.029064   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetMachineName
	I0816 18:14:55.029320   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.032273   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032693   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.032743   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.032983   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.035257   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035581   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.035606   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.035742   74510 provision.go:143] copyHostCerts
	I0816 18:14:55.035797   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem, removing ...
	I0816 18:14:55.035814   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem
	I0816 18:14:55.035899   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/cert.pem (1123 bytes)
	I0816 18:14:55.035996   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem, removing ...
	I0816 18:14:55.036004   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem
	I0816 18:14:55.036024   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/key.pem (1679 bytes)
	I0816 18:14:55.036081   74510 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem, removing ...
	I0816 18:14:55.036087   74510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem
	I0816 18:14:55.036106   74510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-9545/.minikube/ca.pem (1082 bytes)
	I0816 18:14:55.036155   74510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem org=jenkins.embed-certs-777541 san=[127.0.0.1 192.168.61.218 embed-certs-777541 localhost minikube]
	I0816 18:14:55.182540   74510 provision.go:177] copyRemoteCerts
	I0816 18:14:55.182606   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:14:55.182633   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.185807   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186179   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.186229   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.186429   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.186619   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.186770   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.186884   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.262494   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 18:14:55.285186   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:14:55.307082   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 18:14:55.328912   74510 provision.go:87] duration metric: took 299.848734ms to configureAuth
	I0816 18:14:55.328934   74510 buildroot.go:189] setting minikube options for container-runtime
	I0816 18:14:55.329140   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:14:55.329215   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.331989   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332366   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.332414   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.332594   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.332801   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333006   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.333158   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.333312   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.333501   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.333522   74510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 18:14:55.579734   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 18:14:55.579765   74510 machine.go:96] duration metric: took 882.296402ms to provisionDockerMachine
	I0816 18:14:55.579781   74510 start.go:293] postStartSetup for "embed-certs-777541" (driver="kvm2")
	I0816 18:14:55.579793   74510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:14:55.579814   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.580182   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:14:55.580216   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.582826   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583250   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.583285   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.583374   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.583574   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.583739   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.583972   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.663379   74510 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:14:55.667205   74510 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 18:14:55.667231   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/addons for local assets ...
	I0816 18:14:55.667321   74510 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-9545/.minikube/files for local assets ...
	I0816 18:14:55.667426   74510 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem -> 167532.pem in /etc/ssl/certs
	I0816 18:14:55.667560   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:14:55.676427   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:14:55.698188   74510 start.go:296] duration metric: took 118.396211ms for postStartSetup
	I0816 18:14:55.698226   74510 fix.go:56] duration metric: took 20.644852989s for fixHost
	I0816 18:14:55.698245   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.701014   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701359   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.701390   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.701587   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.701755   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.701924   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.702070   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.702241   74510 main.go:141] libmachine: Using SSH client type: native
	I0816 18:14:55.702452   74510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.218 22 <nil> <nil>}
	I0816 18:14:55.702464   74510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 18:14:55.801397   74510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723832095.756052952
	
	I0816 18:14:55.801431   74510 fix.go:216] guest clock: 1723832095.756052952
	I0816 18:14:55.801443   74510 fix.go:229] Guest: 2024-08-16 18:14:55.756052952 +0000 UTC Remote: 2024-08-16 18:14:55.698231489 +0000 UTC m=+357.018707788 (delta=57.821463ms)
	I0816 18:14:55.801492   74510 fix.go:200] guest clock delta is within tolerance: 57.821463ms
	I0816 18:14:55.801504   74510 start.go:83] releasing machines lock for "embed-certs-777541", held for 20.74815396s
	I0816 18:14:55.801528   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.801781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:55.804216   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804617   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.804659   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.804795   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805395   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805622   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:14:55.805730   74510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:14:55.805781   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.805849   74510 ssh_runner.go:195] Run: cat /version.json
	I0816 18:14:55.805877   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:14:55.808587   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.808946   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.808978   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809080   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809249   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809415   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:55.809417   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809442   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:55.809575   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:14:55.809597   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809720   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:14:55.809766   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.809857   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:14:55.809970   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:14:55.885026   74510 ssh_runner.go:195] Run: systemctl --version
	I0816 18:14:55.927940   74510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 18:14:56.072936   74510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 18:14:56.080952   74510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 18:14:56.081029   74510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:14:56.100709   74510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 18:14:56.100734   74510 start.go:495] detecting cgroup driver to use...
	I0816 18:14:56.100791   74510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 18:14:56.115759   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 18:14:56.129714   74510 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:14:56.129774   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:14:56.142909   74510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:14:56.156413   74510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:14:56.268818   74510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:14:56.424536   74510 docker.go:233] disabling docker service ...
	I0816 18:14:56.424612   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:14:56.438033   74510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:14:56.450479   74510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:14:56.560132   74510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:14:56.683671   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:14:56.697636   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:14:56.716486   74510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 18:14:56.716560   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.726082   74510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 18:14:56.726144   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.735971   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.745410   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.754952   74510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:14:56.764717   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.774153   74510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.789843   74510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 18:14:56.799399   74510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:14:56.807679   74510 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 18:14:56.807743   74510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 18:14:56.819873   74510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:14:56.829921   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:14:56.936372   74510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 18:14:57.073931   74510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 18:14:57.073998   74510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 18:14:57.078254   74510 start.go:563] Will wait 60s for crictl version
	I0816 18:14:57.078327   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:14:57.081833   74510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:14:57.121402   74510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 18:14:57.121476   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.149262   74510 ssh_runner.go:195] Run: crio --version
	I0816 18:14:57.183015   74510 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
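The 74510 run above first stops containerd and the docker/cri-docker units, then points CRI-O at the right pause image and cgroup driver by sed-editing /etc/crio/crio.conf.d/02-crio.conf over SSH before restarting crio. Below is a minimal standalone Go sketch of the same two drop-in edits, not minikube's own code; the file path, pause image tag, and cgroup manager value are taken from the log lines above, everything else is illustrative.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// Write the drop-in back; CRI-O picks it up on the "systemctl restart crio" seen later in the log.
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		log.Fatal(err)
	}
}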
	I0816 18:14:53.146986   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:53.647279   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.147587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:54.647911   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.147322   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:55.647765   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.147695   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:56.647296   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.147031   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.647108   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:57.184157   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetIP
	I0816 18:14:57.186758   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187177   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:14:57.187206   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:14:57.187439   74510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 18:14:57.191152   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:14:57.203073   74510 kubeadm.go:883] updating cluster {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:14:57.203240   74510 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 18:14:57.203332   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:14:57.238289   74510 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 18:14:57.238348   74510 ssh_runner.go:195] Run: which lz4
	I0816 18:14:57.242251   74510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 18:14:57.246081   74510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 18:14:57.246124   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 18:14:58.459887   74510 crio.go:462] duration metric: took 1.217672418s to copy over tarball
	I0816 18:14:58.459960   74510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 18:14:54.707069   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.206750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:55.449391   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:57.449830   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:59.451338   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:14:58.147661   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:58.647270   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.147355   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.647821   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.148023   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.647165   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.147669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:01.647960   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.147721   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:02.647932   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:00.545989   74510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085985152s)
	I0816 18:15:00.546028   74510 crio.go:469] duration metric: took 2.086110527s to extract the tarball
	I0816 18:15:00.546039   74510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 18:15:00.587096   74510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:15:00.630366   74510 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 18:15:00.630394   74510 cache_images.go:84] Images are preloaded, skipping loading
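The preload step above works in two stages: crio.go lists images with "sudo crictl images --output json" and, not finding registry.k8s.io/kube-apiserver:v1.31.0, copies the ~389 MB preloaded-images tarball over and extracts it with tar -I lz4 before re-checking. A hedged Go sketch of that first check follows; the JSON field names are assumed to follow crictl's ListImages output, so treat this as illustrative rather than minikube's code.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the relevant part of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.31.0" // the tag the log checks for
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded images present; skipping tarball extraction")
				return
			}
		}
	}
	fmt.Println(want, "missing; the preloaded tarball would be copied and extracted")
}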
	I0816 18:15:00.630405   74510 kubeadm.go:934] updating node { 192.168.61.218 8443 v1.31.0 crio true true} ...
	I0816 18:15:00.630540   74510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-777541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:15:00.630630   74510 ssh_runner.go:195] Run: crio config
	I0816 18:15:00.681196   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:00.681224   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:00.681235   74510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:15:00.681262   74510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.218 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-777541 NodeName:embed-certs-777541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 18:15:00.681439   74510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-777541"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:15:00.681534   74510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 18:15:00.691239   74510 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:15:00.691294   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:15:00.700059   74510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 18:15:00.717826   74510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:15:00.733475   74510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 18:15:00.750175   74510 ssh_runner.go:195] Run: grep 192.168.61.218	control-plane.minikube.internal$ /etc/hosts
	I0816 18:15:00.753865   74510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
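Both /etc/hosts edits in this run (host.minikube.internal earlier and control-plane.minikube.internal here) follow the same idempotent pattern: drop any stale line for the name, append the current mapping, and copy the result back as root. A rough Go equivalent for the control-plane entry, with the IP and hostname taken from the log and everything else a sketch (it must run as root to rewrite /etc/hosts):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.61.218\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Keep every line that does not already map the hostname (the grep -v step).
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	// Append the fresh mapping (the echo step) and write the file back (the cp step).
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}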
	I0816 18:15:00.765531   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:00.875234   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:00.893095   74510 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541 for IP: 192.168.61.218
	I0816 18:15:00.893115   74510 certs.go:194] generating shared ca certs ...
	I0816 18:15:00.893131   74510 certs.go:226] acquiring lock for ca certs: {Name:mk1e000575c94c138513704c2900b8a68810eb65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:00.893274   74510 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key
	I0816 18:15:00.893318   74510 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key
	I0816 18:15:00.893327   74510 certs.go:256] generating profile certs ...
	I0816 18:15:00.893403   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/client.key
	I0816 18:15:00.893459   74510 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key.dd0c1a01
	I0816 18:15:00.893503   74510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key
	I0816 18:15:00.893617   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem (1338 bytes)
	W0816 18:15:00.893645   74510 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753_empty.pem, impossibly tiny 0 bytes
	I0816 18:15:00.893655   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:15:00.893675   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/ca.pem (1082 bytes)
	I0816 18:15:00.893698   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:15:00.893721   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/certs/key.pem (1679 bytes)
	I0816 18:15:00.893759   74510 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem (1708 bytes)
	I0816 18:15:00.894445   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:15:00.936535   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:15:00.969775   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:15:01.013053   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:15:01.046087   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 18:15:01.073290   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 18:15:01.097033   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:15:01.119859   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/embed-certs-777541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:15:01.141943   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/certs/16753.pem --> /usr/share/ca-certificates/16753.pem (1338 bytes)
	I0816 18:15:01.168752   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/ssl/certs/167532.pem --> /usr/share/ca-certificates/167532.pem (1708 bytes)
	I0816 18:15:01.191193   74510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:15:01.213691   74510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:15:01.229374   74510 ssh_runner.go:195] Run: openssl version
	I0816 18:15:01.234563   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16753.pem && ln -fs /usr/share/ca-certificates/16753.pem /etc/ssl/certs/16753.pem"
	I0816 18:15:01.244301   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248156   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:00 /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.248220   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16753.pem
	I0816 18:15:01.253468   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16753.pem /etc/ssl/certs/51391683.0"
	I0816 18:15:01.262917   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167532.pem && ln -fs /usr/share/ca-certificates/167532.pem /etc/ssl/certs/167532.pem"
	I0816 18:15:01.272577   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276790   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:00 /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.276841   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167532.pem
	I0816 18:15:01.281847   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167532.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:15:01.291789   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:15:01.302422   74510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306320   74510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:49 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.306364   74510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:15:01.311335   74510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:15:01.320713   74510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:15:01.324442   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:15:01.330137   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:15:01.335693   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:15:01.340987   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:15:01.346071   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:15:01.351280   74510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
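The "openssl x509 ... -checkend 86400" runs above ask whether each control-plane certificate is still valid for at least another 24 hours; only a certificate about to expire would need to be regenerated before the restart. A small Go equivalent using only the standard library; the path is one of the files checked above, the rest is a sketch.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// One of the certificates the log checks with -checkend 86400.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24 hours")
}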
	I0816 18:15:01.357275   74510 kubeadm.go:392] StartCluster: {Name:embed-certs-777541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-777541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:15:01.357388   74510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 18:15:01.357427   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.400422   74510 cri.go:89] found id: ""
	I0816 18:15:01.400497   74510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:15:01.410142   74510 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:15:01.410162   74510 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:15:01.410211   74510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:15:01.419129   74510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:15:01.420130   74510 kubeconfig.go:125] found "embed-certs-777541" server: "https://192.168.61.218:8443"
	I0816 18:15:01.422036   74510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:15:01.430665   74510 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.218
	I0816 18:15:01.430694   74510 kubeadm.go:1160] stopping kube-system containers ...
	I0816 18:15:01.430705   74510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 18:15:01.430762   74510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:15:01.469108   74510 cri.go:89] found id: ""
	I0816 18:15:01.469182   74510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 18:15:01.486125   74510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:15:01.495311   74510 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:15:01.495335   74510 kubeadm.go:157] found existing configuration files:
	
	I0816 18:15:01.495384   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:15:01.504066   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:15:01.504128   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:15:01.513222   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:15:01.521593   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:15:01.521692   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:15:01.530413   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.539027   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:15:01.539101   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:15:01.547802   74510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:15:01.557143   74510 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:15:01.557203   74510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:15:01.568616   74510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:15:01.578091   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:01.700661   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.631047   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.833132   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.900476   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:02.972431   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:15:02.972514   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.473296   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:14:59.707731   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:02.206825   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:01.948070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.948398   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:03.147098   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.646983   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.147320   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.647649   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.647999   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.147901   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:06.647340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.147339   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:07.648033   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:03.973603   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.472779   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:04.972846   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.473594   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:05.487878   74510 api_server.go:72] duration metric: took 2.51545841s to wait for apiserver process to appear ...
	I0816 18:15:05.487914   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:15:05.487937   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.450583   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.450618   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.450635   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.495625   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 18:15:08.495656   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 18:15:08.495669   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.516711   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.516744   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:04.836663   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:07.206999   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:06.447839   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.449939   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:08.988897   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:08.996347   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:08.996374   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.488013   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.499514   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 18:15:09.499559   74510 api_server.go:103] status: https://192.168.61.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 18:15:09.988080   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:15:09.992106   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:15:09.998515   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:15:09.998542   74510 api_server.go:131] duration metric: took 4.510619176s to wait for apiserver health ...
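The healthz sequence above is a plain retry loop: the probe is unauthenticated, so it first sees 403 Forbidden for system:anonymous, then 500 while the post-start hooks finish, and finally 200 once the apiserver is ready. A minimal sketch of such a loop follows; certificate verification is disabled because the probe sends no client credentials, the URL is the one from the log, and the timeouts are illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip TLS verification: this anonymous probe only cares about the HTTP status code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.218:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}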
	I0816 18:15:09.998555   74510 cni.go:84] Creating CNI manager for ""
	I0816 18:15:09.998563   74510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:15:10.000470   74510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:15:10.001870   74510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:15:10.011805   74510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:15:10.032349   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:15:10.046765   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:15:10.046798   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 18:15:10.046808   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 18:15:10.046817   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 18:15:10.046829   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 18:15:10.046838   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 18:15:10.046847   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 18:15:10.046855   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:15:10.046867   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 18:15:10.046876   74510 system_pods.go:74] duration metric: took 14.506593ms to wait for pod list to return data ...
	I0816 18:15:10.046889   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:15:10.050663   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:15:10.050686   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:15:10.050699   74510 node_conditions.go:105] duration metric: took 3.805313ms to run NodePressure ...
	I0816 18:15:10.050717   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 18:15:10.344177   74510 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348795   74510 kubeadm.go:739] kubelet initialised
	I0816 18:15:10.348820   74510 kubeadm.go:740] duration metric: took 4.612695ms waiting for restarted kubelet to initialise ...
	I0816 18:15:10.348830   74510 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:10.355270   74510 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.361564   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361584   74510 pod_ready.go:82] duration metric: took 6.2936ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.361592   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.361598   74510 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.367126   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367149   74510 pod_ready.go:82] duration metric: took 5.542782ms for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.367159   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "etcd-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.367166   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.372241   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372262   74510 pod_ready.go:82] duration metric: took 5.086551ms for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.372273   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.372301   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.436397   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436423   74510 pod_ready.go:82] duration metric: took 64.108858ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.436432   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.436443   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:10.836116   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836146   74510 pod_ready.go:82] duration metric: took 399.693364ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:10.836158   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-proxy-j5rl7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:10.836165   74510 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.235403   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235426   74510 pod_ready.go:82] duration metric: took 399.255693ms for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.235439   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.235445   74510 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:11.635717   74510 pod_ready.go:98] node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635746   74510 pod_ready.go:82] duration metric: took 400.29283ms for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:15:11.635756   74510 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-777541" hosting pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:11.635762   74510 pod_ready.go:39] duration metric: took 1.286923943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
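	For context, the pod_ready.go lines above are polling each system pod's Ready condition until it flips to "True" or the per-pod timeout expires. The following is a minimal sketch of that pattern using client-go; it is not the minikube implementation, and the kubeconfig path and 2-second poll interval are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its PodReady condition is True or ctx expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		ticker := time.NewTicker(2 * time.Second) // assumed poll interval
		defer ticker.Stop()
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for pod %s/%s: %w", ns, name, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitPodReady(ctx, cs, "kube-system", "etcd-embed-certs-777541"); err != nil {
			fmt.Println(err)
		}
	}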
	I0816 18:15:11.635784   74510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:15:11.646221   74510 ops.go:34] apiserver oom_adj: -16
	I0816 18:15:11.646248   74510 kubeadm.go:597] duration metric: took 10.23607804s to restartPrimaryControlPlane
	I0816 18:15:11.646269   74510 kubeadm.go:394] duration metric: took 10.288999278s to StartCluster
	I0816 18:15:11.646322   74510 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.646405   74510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:15:11.648652   74510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:15:11.648939   74510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.218 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:15:11.649056   74510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:15:11.649124   74510 config.go:182] Loaded profile config "embed-certs-777541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:15:11.649155   74510 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-777541"
	I0816 18:15:11.649165   74510 addons.go:69] Setting metrics-server=true in profile "embed-certs-777541"
	I0816 18:15:11.649192   74510 addons.go:234] Setting addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:11.649201   74510 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-777541"
	W0816 18:15:11.649205   74510 addons.go:243] addon metrics-server should already be in state true
	I0816 18:15:11.649193   74510 addons.go:69] Setting default-storageclass=true in profile "embed-certs-777541"
	I0816 18:15:11.649252   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649254   74510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-777541"
	W0816 18:15:11.649209   74510 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:15:11.649332   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.649702   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649706   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649742   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649772   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.649877   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.649930   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.651580   74510 out.go:177] * Verifying Kubernetes components...
	I0816 18:15:11.652903   74510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:15:11.665975   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0816 18:15:11.666041   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0816 18:15:11.666404   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666439   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.666986   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667005   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667051   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.667085   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.667312   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667517   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.667846   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.667899   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.668039   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.668077   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.669328   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0816 18:15:11.669765   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.670270   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.670301   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.670658   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.670896   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.674148   74510 addons.go:234] Setting addon default-storageclass=true in "embed-certs-777541"
	W0816 18:15:11.674165   74510 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:15:11.674184   74510 host.go:66] Checking if "embed-certs-777541" exists ...
	I0816 18:15:11.674448   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.674482   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.683629   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0816 18:15:11.683637   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I0816 18:15:11.684040   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684048   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.684499   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684516   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684653   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.684670   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.684968   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685114   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.685136   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.685329   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.687030   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.687130   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.688852   74510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:15:11.688855   74510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:15:08.147308   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:08.647669   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.147149   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:09.647072   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.147381   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:10.647567   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.147101   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.647587   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.146972   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:12.647842   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:11.689590   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0816 18:15:11.690041   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.690152   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:15:11.690170   74510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:15:11.690186   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690223   74510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:11.690238   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:15:11.690253   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.690606   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.690627   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.691006   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.691543   74510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:15:11.691575   74510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:15:11.693646   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693669   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.693988   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694007   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694051   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.694064   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.694275   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694322   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.694436   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694468   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.694545   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694602   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.694677   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.694885   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.709409   74510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0816 18:15:11.709800   74510 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:15:11.710343   74510 main.go:141] libmachine: Using API Version  1
	I0816 18:15:11.710363   74510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:15:11.710700   74510 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:15:11.710874   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetState
	I0816 18:15:11.712484   74510 main.go:141] libmachine: (embed-certs-777541) Calling .DriverName
	I0816 18:15:11.712691   74510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:11.712706   74510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:15:11.712723   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHHostname
	I0816 18:15:11.715590   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716017   74510 main.go:141] libmachine: (embed-certs-777541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:9a:0c", ip: ""} in network mk-embed-certs-777541: {Iface:virbr3 ExpiryTime:2024-08-16 19:14:46 +0000 UTC Type:0 Mac:52:54:00:54:9a:0c Iaid: IPaddr:192.168.61.218 Prefix:24 Hostname:embed-certs-777541 Clientid:01:52:54:00:54:9a:0c}
	I0816 18:15:11.716050   74510 main.go:141] libmachine: (embed-certs-777541) DBG | domain embed-certs-777541 has defined IP address 192.168.61.218 and MAC address 52:54:00:54:9a:0c in network mk-embed-certs-777541
	I0816 18:15:11.716167   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHPort
	I0816 18:15:11.716379   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHKeyPath
	I0816 18:15:11.716572   74510 main.go:141] libmachine: (embed-certs-777541) Calling .GetSSHUsername
	I0816 18:15:11.716737   74510 sshutil.go:53] new ssh client: &{IP:192.168.61.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/embed-certs-777541/id_rsa Username:docker}
	I0816 18:15:11.864710   74510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:15:11.885871   74510 node_ready.go:35] waiting up to 6m0s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:11.985725   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:15:12.007635   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:15:12.007669   74510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:15:12.040044   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:15:12.059661   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:15:12.059687   74510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:15:12.123787   74510 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.123812   74510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:15:12.167249   74510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:15:12.457960   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.457985   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458264   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:12.458315   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458334   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.458348   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.458360   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.458577   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.458590   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468651   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:12.468675   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:12.468921   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:12.468940   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:12.468963   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.203995   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163904081s)
	I0816 18:15:13.204048   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204060   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204309   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.204350   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204359   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.204368   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.204376   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.204562   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.204589   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213068   74510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.045790147s)
	I0816 18:15:13.213101   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213115   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213533   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213551   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213555   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.213560   74510 main.go:141] libmachine: Making call to close driver server
	I0816 18:15:13.213595   74510 main.go:141] libmachine: (embed-certs-777541) Calling .Close
	I0816 18:15:13.213869   74510 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:15:13.213887   74510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:15:13.213897   74510 addons.go:475] Verifying addon metrics-server=true in "embed-certs-777541"
	I0816 18:15:13.213901   74510 main.go:141] libmachine: (embed-certs-777541) DBG | Closing plugin on server side
	I0816 18:15:13.215724   74510 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 18:15:13.217031   74510 addons.go:510] duration metric: took 1.567977779s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
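	The addon-enable sequence above copies the metrics-server and storage-provisioner manifests to /etc/kubernetes/addons on the node and applies them with the bundled kubectl over SSH. A minimal sketch of the equivalent apply step is below; it assumes it runs on the node itself (for example via minikube ssh), where /var/lib/minikube/kubeconfig and the manifest files exist, and it is not the minikube addons code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"--kubeconfig", "/var/lib/minikube/kubeconfig", "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		// Same effect as the "kubectl apply -f ..." run shown in the log above.
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}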
	I0816 18:15:09.706813   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:11.708577   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:10.947986   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:12.949227   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:13.147558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.647755   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.147408   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:14.647810   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.147888   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:15.647476   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.147258   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:16.647785   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:17.647852   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:13.889379   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:15.889764   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:18.390031   74510 node_ready.go:53] node "embed-certs-777541" has status "Ready":"False"
	I0816 18:15:14.207743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:16.705831   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:15.448826   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:17.950756   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:18.147086   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.647013   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.147027   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:19.647100   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.147070   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:20.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.147251   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:21.647856   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.147427   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:22.647231   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:18.890110   74510 node_ready.go:49] node "embed-certs-777541" has status "Ready":"True"
	I0816 18:15:18.890138   74510 node_ready.go:38] duration metric: took 7.004237799s for node "embed-certs-777541" to be "Ready" ...
	I0816 18:15:18.890156   74510 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:15:18.897124   74510 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902860   74510 pod_ready.go:93] pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:18.902878   74510 pod_ready.go:82] duration metric: took 5.73242ms for pod "coredns-6f6b679f8f-8njs2" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:18.902886   74510 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:20.909185   74510 pod_ready.go:103] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.909629   74510 pod_ready.go:93] pod "etcd-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:21.909660   74510 pod_ready.go:82] duration metric: took 3.006768325s for pod "etcd-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:21.909670   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916066   74510 pod_ready.go:93] pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.916090   74510 pod_ready.go:82] duration metric: took 1.006414177s for pod "kube-apiserver-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.916099   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920882   74510 pod_ready.go:93] pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.920908   74510 pod_ready.go:82] duration metric: took 4.802561ms for pod "kube-controller-manager-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.920918   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926952   74510 pod_ready.go:93] pod "kube-proxy-j5rl7" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:22.926975   74510 pod_ready.go:82] duration metric: took 6.0498ms for pod "kube-proxy-j5rl7" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:22.926984   74510 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:19.206127   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:21.206280   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.705588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:20.448793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:22.948798   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:23.147403   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:23.647030   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.147677   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.647324   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.147973   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:25.647097   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.147160   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:26.646963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.147620   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:27.647918   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:24.933953   74510 pod_ready.go:103] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.433826   74510 pod_ready.go:93] pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace has status "Ready":"True"
	I0816 18:15:25.433846   74510 pod_ready.go:82] duration metric: took 2.506855714s for pod "kube-scheduler-embed-certs-777541" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:25.433855   74510 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	I0816 18:15:27.440119   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.707915   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.206580   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:25.447687   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:27.948700   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:28.146994   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:28.647364   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.147332   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.647773   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.147276   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:30.647794   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.147398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:31.647565   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.147139   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:32.647961   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:29.440564   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:31.940747   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:30.706544   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.706852   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:29.948982   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:32.447920   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:34.448186   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:33.147648   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:33.647087   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.147881   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.646988   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.147118   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:35.647978   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.147541   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:36.647423   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.147051   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:37.647726   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:34.439692   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.439956   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.440315   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:35.206291   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:37.206902   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:36.948416   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.447952   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:38.147192   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:38.647318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.147186   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:39.647662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.147044   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.647787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.147638   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:41.647490   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.147787   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:42.647959   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:40.440405   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:42.440727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:39.207086   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.706048   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.706585   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:41.450069   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.948101   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:43.147938   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:43.647855   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.147781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:44.647710   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:44.647796   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:44.682176   75402 cri.go:89] found id: ""
	I0816 18:15:44.682207   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.682218   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:44.682226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:44.682285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:44.717500   75402 cri.go:89] found id: ""
	I0816 18:15:44.717530   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.717540   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:44.717552   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:44.717620   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:44.751816   75402 cri.go:89] found id: ""
	I0816 18:15:44.751847   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.751858   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:44.751865   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:44.751942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:44.783236   75402 cri.go:89] found id: ""
	I0816 18:15:44.783260   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.783267   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:44.783272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:44.783337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:44.813087   75402 cri.go:89] found id: ""
	I0816 18:15:44.813110   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.813116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:44.813122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:44.813166   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:44.843568   75402 cri.go:89] found id: ""
	I0816 18:15:44.843599   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.843609   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:44.843616   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:44.843679   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:44.873694   75402 cri.go:89] found id: ""
	I0816 18:15:44.873723   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.873734   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:44.873741   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:44.873808   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:44.906183   75402 cri.go:89] found id: ""
	I0816 18:15:44.906212   75402 logs.go:276] 0 containers: []
	W0816 18:15:44.906222   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:44.906231   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:44.906241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:44.958963   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:44.958993   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:44.972390   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:44.972415   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:45.091624   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:45.091645   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:45.091661   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:45.159927   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:45.159963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
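	The round above scans the CRI for each control-plane container with crictl, finds none, and falls back to collecting kubelet, dmesg, and CRI-O logs. A minimal sketch of that scan-then-gather pattern is below; it is not minikube's cri.go/logs.go, and it assumes crictl and journalctl are available on the host where it runs.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all CRI containers (running or not) whose name matches.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one container ID per line when present
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Println("crictl failed:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%s: %v\n", name, ids)
			}
		}
		// With nothing running, the kubelet journal is the next place to look,
		// as the "Gathering logs for kubelet" step above does.
		logs, _ := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		fmt.Println(string(logs))
	}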
	I0816 18:15:47.698398   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:47.711848   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:47.711917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:47.744247   75402 cri.go:89] found id: ""
	I0816 18:15:47.744278   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.744288   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:47.744295   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:47.744374   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:47.783188   75402 cri.go:89] found id: ""
	I0816 18:15:47.783211   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.783219   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:47.783224   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:47.783270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:47.829284   75402 cri.go:89] found id: ""
	I0816 18:15:47.829320   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.829333   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:47.829341   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:47.829413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:47.879482   75402 cri.go:89] found id: ""
	I0816 18:15:47.879514   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.879525   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:47.879532   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:47.879606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:47.913766   75402 cri.go:89] found id: ""
	I0816 18:15:47.913797   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.913808   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:47.913815   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:47.913880   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:47.947262   75402 cri.go:89] found id: ""
	I0816 18:15:47.947340   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.947353   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:47.947362   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:47.947427   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:47.979638   75402 cri.go:89] found id: ""
	I0816 18:15:47.979667   75402 logs.go:276] 0 containers: []
	W0816 18:15:47.979678   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:47.979685   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:47.979741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:48.010246   75402 cri.go:89] found id: ""
	I0816 18:15:48.010277   75402 logs.go:276] 0 containers: []
	W0816 18:15:48.010288   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:48.010296   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:48.010310   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:48.083916   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:48.083953   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:44.940775   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.440356   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:46.207236   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.705791   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:45.948300   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:47.948501   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:48.120254   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:48.120285   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:48.169590   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:48.169628   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:48.182821   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:48.182850   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:48.254088   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:50.755114   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:50.768167   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:50.768250   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:50.800881   75402 cri.go:89] found id: ""
	I0816 18:15:50.800906   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.800913   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:50.800918   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:50.800969   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:50.833538   75402 cri.go:89] found id: ""
	I0816 18:15:50.833567   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.833578   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:50.833586   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:50.833649   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:50.867306   75402 cri.go:89] found id: ""
	I0816 18:15:50.867336   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.867347   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:50.867353   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:50.867400   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:50.900029   75402 cri.go:89] found id: ""
	I0816 18:15:50.900055   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.900064   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:50.900072   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:50.900135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:50.933604   75402 cri.go:89] found id: ""
	I0816 18:15:50.933630   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.933638   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:50.933643   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:50.933707   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:50.966102   75402 cri.go:89] found id: ""
	I0816 18:15:50.966131   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.966141   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:50.966149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:50.966210   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:50.998007   75402 cri.go:89] found id: ""
	I0816 18:15:50.998036   75402 logs.go:276] 0 containers: []
	W0816 18:15:50.998047   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:50.998054   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:50.998115   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:51.032306   75402 cri.go:89] found id: ""
	I0816 18:15:51.032342   75402 logs.go:276] 0 containers: []
	W0816 18:15:51.032349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:51.032357   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:51.032369   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:51.083186   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:51.083222   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:51.096072   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:51.096153   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:51.162667   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:51.162693   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:51.162709   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:51.241913   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:51.241954   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:49.440546   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:51.940026   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.706662   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.206075   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:50.447947   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:52.448340   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:54.448431   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:53.779323   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:53.793358   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:53.793433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:53.827380   75402 cri.go:89] found id: ""
	I0816 18:15:53.827414   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.827424   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:53.827430   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:53.827489   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:53.867331   75402 cri.go:89] found id: ""
	I0816 18:15:53.867370   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.867380   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:53.867386   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:53.867438   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:53.899445   75402 cri.go:89] found id: ""
	I0816 18:15:53.899477   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.899489   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:53.899498   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:53.899588   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:53.936527   75402 cri.go:89] found id: ""
	I0816 18:15:53.936556   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.936568   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:53.936576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:53.936653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:53.970739   75402 cri.go:89] found id: ""
	I0816 18:15:53.970765   75402 logs.go:276] 0 containers: []
	W0816 18:15:53.970773   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:53.970780   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:53.970842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:54.004119   75402 cri.go:89] found id: ""
	I0816 18:15:54.004150   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.004159   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:54.004164   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:54.004217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:54.038370   75402 cri.go:89] found id: ""
	I0816 18:15:54.038400   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.038411   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:54.038416   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:54.038472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:54.079346   75402 cri.go:89] found id: ""
	I0816 18:15:54.079375   75402 logs.go:276] 0 containers: []
	W0816 18:15:54.079383   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:54.079392   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:54.079403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:54.116551   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:54.116586   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:54.169930   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:54.169970   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:54.182416   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:54.182448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:54.253516   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:54.253539   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:54.253559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:56.833124   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:56.846139   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:56.846211   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:56.880899   75402 cri.go:89] found id: ""
	I0816 18:15:56.880928   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.880939   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:56.880945   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:56.880994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:56.913362   75402 cri.go:89] found id: ""
	I0816 18:15:56.913393   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.913406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:56.913415   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:56.913507   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:15:56.951876   75402 cri.go:89] found id: ""
	I0816 18:15:56.951904   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.951914   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:15:56.951919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:15:56.951988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:15:56.986335   75402 cri.go:89] found id: ""
	I0816 18:15:56.986358   75402 logs.go:276] 0 containers: []
	W0816 18:15:56.986366   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:15:56.986372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:15:56.986423   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:15:57.022485   75402 cri.go:89] found id: ""
	I0816 18:15:57.022511   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.022522   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:15:57.022529   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:15:57.022641   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:15:57.055436   75402 cri.go:89] found id: ""
	I0816 18:15:57.055463   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.055470   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:15:57.055476   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:15:57.055536   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:15:57.085930   75402 cri.go:89] found id: ""
	I0816 18:15:57.085965   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.085975   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:15:57.085981   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:15:57.086032   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:15:57.120436   75402 cri.go:89] found id: ""
	I0816 18:15:57.120466   75402 logs.go:276] 0 containers: []
	W0816 18:15:57.120477   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:15:57.120488   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:15:57.120501   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:15:57.202161   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:15:57.202218   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:15:57.243766   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:15:57.243805   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:15:57.295552   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:15:57.295585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:15:57.307769   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:15:57.307802   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:15:57.390480   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:15:53.941399   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.439763   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:58.440357   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:55.206970   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:57.207312   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:56.948085   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.448174   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.891480   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:15:59.904766   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:15:59.904836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:15:59.939209   75402 cri.go:89] found id: ""
	I0816 18:15:59.939241   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.939252   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:15:59.939260   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:15:59.939324   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:15:59.971782   75402 cri.go:89] found id: ""
	I0816 18:15:59.971812   75402 logs.go:276] 0 containers: []
	W0816 18:15:59.971822   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:15:59.971832   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:15:59.971894   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:00.018585   75402 cri.go:89] found id: ""
	I0816 18:16:00.018630   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.018643   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:00.018654   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:00.018722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.050484   75402 cri.go:89] found id: ""
	I0816 18:16:00.050520   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.050532   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:00.050540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:00.050603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:00.082900   75402 cri.go:89] found id: ""
	I0816 18:16:00.082930   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.082942   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:00.082951   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:00.083025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:00.115330   75402 cri.go:89] found id: ""
	I0816 18:16:00.115363   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.115372   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:00.115378   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:00.115442   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:00.150898   75402 cri.go:89] found id: ""
	I0816 18:16:00.150935   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.150952   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:00.150960   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:00.151033   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:00.193304   75402 cri.go:89] found id: ""
	I0816 18:16:00.193338   75402 logs.go:276] 0 containers: []
	W0816 18:16:00.193349   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:00.193359   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:00.193370   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:00.247340   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:00.247376   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:00.260470   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:00.260500   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:00.336483   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:00.336506   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:00.336521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:00.421251   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:00.421289   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:02.964042   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:02.977284   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:02.977381   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:03.009533   75402 cri.go:89] found id: ""
	I0816 18:16:03.009574   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.009586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:03.009594   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:03.009673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:03.043756   75402 cri.go:89] found id: ""
	I0816 18:16:03.043784   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.043794   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:03.043802   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:03.043867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:03.078817   75402 cri.go:89] found id: ""
	I0816 18:16:03.078840   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.078848   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:03.078853   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:03.078906   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:00.440728   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:02.440788   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:15:59.706129   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.707967   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:01.948193   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.448504   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:03.112874   75402 cri.go:89] found id: ""
	I0816 18:16:03.112903   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.112912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:03.112918   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:03.112985   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:03.152008   75402 cri.go:89] found id: ""
	I0816 18:16:03.152040   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.152052   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:03.152059   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:03.152125   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:03.187353   75402 cri.go:89] found id: ""
	I0816 18:16:03.187386   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.187396   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:03.187404   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:03.187467   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:03.220860   75402 cri.go:89] found id: ""
	I0816 18:16:03.220895   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.220903   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:03.220909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:03.220958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:03.252202   75402 cri.go:89] found id: ""
	I0816 18:16:03.252240   75402 logs.go:276] 0 containers: []
	W0816 18:16:03.252247   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:03.252256   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:03.252268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:03.286907   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:03.286934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:03.338212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:03.338249   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:03.352548   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:03.352585   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:03.427580   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:03.427610   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:03.427626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.011792   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:06.024201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:06.024277   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:06.058328   75402 cri.go:89] found id: ""
	I0816 18:16:06.058356   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.058367   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:06.058373   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:06.058433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:06.091262   75402 cri.go:89] found id: ""
	I0816 18:16:06.091298   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.091311   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:06.091318   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:06.091382   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:06.124114   75402 cri.go:89] found id: ""
	I0816 18:16:06.124146   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.124154   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:06.124159   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:06.124220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:06.155379   75402 cri.go:89] found id: ""
	I0816 18:16:06.155406   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.155416   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:06.155422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:06.155471   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:06.189442   75402 cri.go:89] found id: ""
	I0816 18:16:06.189472   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.189480   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:06.189485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:06.189538   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:06.228881   75402 cri.go:89] found id: ""
	I0816 18:16:06.228910   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.228921   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:06.228929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:06.229003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:06.262272   75402 cri.go:89] found id: ""
	I0816 18:16:06.262299   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.262310   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:06.262317   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:06.262386   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:06.295427   75402 cri.go:89] found id: ""
	I0816 18:16:06.295456   75402 logs.go:276] 0 containers: []
	W0816 18:16:06.295468   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:06.295478   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:06.295492   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:06.347569   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:06.347608   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:06.362786   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:06.362825   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:06.432020   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:06.432044   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:06.432059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:06.512085   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:06.512120   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:04.940128   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:07.439708   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:04.206477   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.208125   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.706765   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:06.947599   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:08.948183   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:09.051957   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:09.066630   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:09.066690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:09.101484   75402 cri.go:89] found id: ""
	I0816 18:16:09.101515   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.101526   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:09.101536   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:09.101614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:09.140645   75402 cri.go:89] found id: ""
	I0816 18:16:09.140677   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.140689   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:09.140696   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:09.140758   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:09.174666   75402 cri.go:89] found id: ""
	I0816 18:16:09.174698   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.174708   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:09.174717   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:09.174780   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:09.209715   75402 cri.go:89] found id: ""
	I0816 18:16:09.209748   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.209758   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:09.209767   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:09.209845   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:09.243681   75402 cri.go:89] found id: ""
	I0816 18:16:09.243712   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.243720   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:09.243726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:09.243781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:09.278058   75402 cri.go:89] found id: ""
	I0816 18:16:09.278090   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.278102   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:09.278111   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:09.278178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:09.313092   75402 cri.go:89] found id: ""
	I0816 18:16:09.313122   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.313132   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:09.313137   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:09.313201   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:09.345203   75402 cri.go:89] found id: ""
	I0816 18:16:09.345229   75402 logs.go:276] 0 containers: []
	W0816 18:16:09.345236   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:09.345245   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:09.345259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:09.358198   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:09.358225   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:09.422024   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:09.422047   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:09.422059   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:09.498684   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:09.498717   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.535349   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:09.535382   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.087472   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:12.100412   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:12.100477   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:12.133982   75402 cri.go:89] found id: ""
	I0816 18:16:12.134018   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.134030   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:12.134038   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:12.134100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:12.166466   75402 cri.go:89] found id: ""
	I0816 18:16:12.166497   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.166507   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:12.166514   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:12.166589   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:12.197752   75402 cri.go:89] found id: ""
	I0816 18:16:12.197779   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.197790   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:12.197797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:12.197856   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:12.239759   75402 cri.go:89] found id: ""
	I0816 18:16:12.239789   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.239801   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:12.239810   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:12.239871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:12.273263   75402 cri.go:89] found id: ""
	I0816 18:16:12.273292   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.273302   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:12.273310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:12.273370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:12.308788   75402 cri.go:89] found id: ""
	I0816 18:16:12.308820   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.308831   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:12.308839   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:12.308897   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:12.345243   75402 cri.go:89] found id: ""
	I0816 18:16:12.345274   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.345281   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:12.345288   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:12.345341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:12.379939   75402 cri.go:89] found id: ""
	I0816 18:16:12.379968   75402 logs.go:276] 0 containers: []
	W0816 18:16:12.379978   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:12.379989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:12.380004   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:12.436097   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:12.436130   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:12.449328   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:12.449357   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:12.518723   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:12.518749   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:12.518764   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:12.600228   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:12.600268   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:09.441051   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.441097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.206853   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.705328   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:11.449793   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:13.948517   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.137940   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:15.150617   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:15.150694   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:15.186029   75402 cri.go:89] found id: ""
	I0816 18:16:15.186057   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.186067   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:15.186074   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:15.186134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:15.219812   75402 cri.go:89] found id: ""
	I0816 18:16:15.219840   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.219851   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:15.219864   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:15.219927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:15.253434   75402 cri.go:89] found id: ""
	I0816 18:16:15.253462   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.253472   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:15.253479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:15.253542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:15.286697   75402 cri.go:89] found id: ""
	I0816 18:16:15.286729   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.286745   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:15.286751   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:15.286810   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:15.319363   75402 cri.go:89] found id: ""
	I0816 18:16:15.319405   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.319415   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:15.319422   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:15.319506   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:15.353900   75402 cri.go:89] found id: ""
	I0816 18:16:15.353924   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.353931   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:15.353937   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:15.353991   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:15.389086   75402 cri.go:89] found id: ""
	I0816 18:16:15.389114   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.389122   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:15.389127   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:15.389184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:15.424069   75402 cri.go:89] found id: ""
	I0816 18:16:15.424099   75402 logs.go:276] 0 containers: []
	W0816 18:16:15.424110   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:15.424121   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:15.424136   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:15.482703   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:15.482738   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:15.496859   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:15.496886   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:15.562178   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:15.562196   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:15.562212   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:15.643484   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:15.643521   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:13.944174   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.439987   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.442569   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:15.706743   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.206088   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:16.448775   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.948447   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:18.180963   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:18.194705   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:18.194783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:18.231302   75402 cri.go:89] found id: ""
	I0816 18:16:18.231337   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.231348   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:18.231355   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:18.231413   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:18.264098   75402 cri.go:89] found id: ""
	I0816 18:16:18.264124   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.264135   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:18.264155   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:18.264228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:18.298133   75402 cri.go:89] found id: ""
	I0816 18:16:18.298165   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.298178   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:18.298186   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:18.298252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:18.331323   75402 cri.go:89] found id: ""
	I0816 18:16:18.331354   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.331362   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:18.331367   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:18.331416   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:18.365677   75402 cri.go:89] found id: ""
	I0816 18:16:18.365709   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.365718   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:18.365724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:18.365774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:18.399801   75402 cri.go:89] found id: ""
	I0816 18:16:18.399835   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.399844   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:18.399850   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:18.399908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:18.438148   75402 cri.go:89] found id: ""
	I0816 18:16:18.438179   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.438189   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:18.438197   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:18.438257   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:18.472185   75402 cri.go:89] found id: ""
	I0816 18:16:18.472215   75402 logs.go:276] 0 containers: []
	W0816 18:16:18.472223   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:18.472232   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:18.472243   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:18.523369   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:18.523400   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:18.536152   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:18.536179   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:18.611539   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:18.611560   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:18.611571   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:18.688043   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:18.688079   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:21.229163   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:21.242641   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:21.242717   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:21.275188   75402 cri.go:89] found id: ""
	I0816 18:16:21.275213   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.275220   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:21.275226   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:21.275275   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:21.308377   75402 cri.go:89] found id: ""
	I0816 18:16:21.308406   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.308417   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:21.308424   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:21.308475   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:21.341067   75402 cri.go:89] found id: ""
	I0816 18:16:21.341098   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.341106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:21.341112   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:21.341170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:21.372707   75402 cri.go:89] found id: ""
	I0816 18:16:21.372743   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.372756   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:21.372763   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:21.372847   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:21.410210   75402 cri.go:89] found id: ""
	I0816 18:16:21.410241   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.410252   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:21.410259   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:21.410323   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:21.444840   75402 cri.go:89] found id: ""
	I0816 18:16:21.444863   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.444872   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:21.444879   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:21.444942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:21.478278   75402 cri.go:89] found id: ""
	I0816 18:16:21.478319   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.478327   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:21.478333   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:21.478395   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:21.512026   75402 cri.go:89] found id: ""
	I0816 18:16:21.512063   75402 logs.go:276] 0 containers: []
	W0816 18:16:21.512073   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:21.512090   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:21.512111   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:21.564800   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:21.564834   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:21.577343   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:21.577368   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:21.663216   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:21.663238   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:21.663251   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:21.741960   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:21.741994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:20.939740   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.942844   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:20.706032   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:22.707112   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:21.449404   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:23.454804   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:24.282136   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:24.296452   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:24.296513   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:24.337173   75402 cri.go:89] found id: ""
	I0816 18:16:24.337200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.337210   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:24.337218   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:24.337282   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:24.374163   75402 cri.go:89] found id: ""
	I0816 18:16:24.374200   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.374213   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:24.374222   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:24.374287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:24.407823   75402 cri.go:89] found id: ""
	I0816 18:16:24.407854   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.407866   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:24.407881   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:24.407953   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:24.444006   75402 cri.go:89] found id: ""
	I0816 18:16:24.444032   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.444042   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:24.444049   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:24.444113   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:24.479082   75402 cri.go:89] found id: ""
	I0816 18:16:24.479110   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.479119   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:24.479125   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:24.479174   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:24.524738   75402 cri.go:89] found id: ""
	I0816 18:16:24.524764   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.524775   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:24.524782   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:24.524842   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:24.560298   75402 cri.go:89] found id: ""
	I0816 18:16:24.560326   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.560335   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:24.560343   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:24.560406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:24.597182   75402 cri.go:89] found id: ""
	I0816 18:16:24.597214   75402 logs.go:276] 0 containers: []
	W0816 18:16:24.597227   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:24.597239   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:24.597254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:24.653063   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:24.653106   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:24.665940   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:24.665972   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:24.736599   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:24.736639   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:24.736657   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:24.821883   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:24.821939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.359558   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:27.382980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:27.383053   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:27.416766   75402 cri.go:89] found id: ""
	I0816 18:16:27.416793   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.416802   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:27.416811   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:27.416873   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:27.452966   75402 cri.go:89] found id: ""
	I0816 18:16:27.452988   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.452995   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:27.453001   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:27.453050   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:27.485850   75402 cri.go:89] found id: ""
	I0816 18:16:27.485885   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.485896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:27.485903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:27.485960   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:27.517667   75402 cri.go:89] found id: ""
	I0816 18:16:27.517694   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.517704   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:27.517711   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:27.517774   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:27.553547   75402 cri.go:89] found id: ""
	I0816 18:16:27.553574   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.553582   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:27.553593   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:27.553653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:27.586857   75402 cri.go:89] found id: ""
	I0816 18:16:27.586884   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.586893   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:27.586898   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:27.586957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:27.621739   75402 cri.go:89] found id: ""
	I0816 18:16:27.621766   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.621776   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:27.621784   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:27.621844   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:27.657772   75402 cri.go:89] found id: ""
	I0816 18:16:27.657797   75402 logs.go:276] 0 containers: []
	W0816 18:16:27.657805   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:27.657819   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:27.657831   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:27.729769   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:27.729796   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:27.729810   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:27.813351   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:27.813403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:27.852985   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:27.853010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:27.908434   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:27.908476   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:25.439828   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.440749   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.207590   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:27.706496   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:25.948579   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:28.448590   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.422781   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:30.435987   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:30.436070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:30.470878   75402 cri.go:89] found id: ""
	I0816 18:16:30.470907   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.470918   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:30.470926   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:30.470983   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:30.504940   75402 cri.go:89] found id: ""
	I0816 18:16:30.504969   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.504980   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:30.504988   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:30.505058   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:30.538680   75402 cri.go:89] found id: ""
	I0816 18:16:30.538708   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.538716   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:30.538722   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:30.538788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:30.574757   75402 cri.go:89] found id: ""
	I0816 18:16:30.574782   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.574791   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:30.574797   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:30.574853   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:30.612500   75402 cri.go:89] found id: ""
	I0816 18:16:30.612529   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.612539   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:30.612547   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:30.612613   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:30.644572   75402 cri.go:89] found id: ""
	I0816 18:16:30.644595   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.644603   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:30.644609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:30.644678   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:30.678199   75402 cri.go:89] found id: ""
	I0816 18:16:30.678232   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.678243   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:30.678252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:30.678331   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:30.709435   75402 cri.go:89] found id: ""
	I0816 18:16:30.709470   75402 logs.go:276] 0 containers: []
	W0816 18:16:30.709482   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:30.709494   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:30.709511   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:30.723430   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:30.723464   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:30.800340   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:30.800374   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:30.800390   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:30.883945   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:30.883986   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:30.922107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:30.922139   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:29.940430   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.440198   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:29.706649   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:32.205271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:30.949515   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.448456   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:33.480016   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:33.494178   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:33.494241   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:33.529497   75402 cri.go:89] found id: ""
	I0816 18:16:33.529527   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.529546   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:33.529554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:33.529614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:33.566670   75402 cri.go:89] found id: ""
	I0816 18:16:33.566700   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.566711   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:33.566718   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:33.566781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:33.603898   75402 cri.go:89] found id: ""
	I0816 18:16:33.603926   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.603937   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:33.603944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:33.604003   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:33.636077   75402 cri.go:89] found id: ""
	I0816 18:16:33.636111   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.636125   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:33.636134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:33.636200   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:33.668974   75402 cri.go:89] found id: ""
	I0816 18:16:33.669002   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.669011   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:33.669017   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:33.669070   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:33.700981   75402 cri.go:89] found id: ""
	I0816 18:16:33.701010   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.701019   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:33.701026   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:33.701088   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:33.735430   75402 cri.go:89] found id: ""
	I0816 18:16:33.735463   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.735474   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:33.735481   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:33.735539   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:33.779797   75402 cri.go:89] found id: ""
	I0816 18:16:33.779829   75402 logs.go:276] 0 containers: []
	W0816 18:16:33.779840   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:33.779851   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:33.779865   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:33.824873   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:33.824908   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:33.874177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:33.874217   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:33.888535   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:33.888561   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:33.957590   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:33.957608   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:33.957627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:36.533660   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:36.546542   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:36.546606   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:36.584056   75402 cri.go:89] found id: ""
	I0816 18:16:36.584085   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.584094   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:36.584099   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:36.584149   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:36.622143   75402 cri.go:89] found id: ""
	I0816 18:16:36.622172   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.622184   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:36.622193   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:36.622262   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:36.655479   75402 cri.go:89] found id: ""
	I0816 18:16:36.655509   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.655520   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:36.655528   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:36.655603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:36.688044   75402 cri.go:89] found id: ""
	I0816 18:16:36.688076   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.688088   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:36.688096   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:36.688161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:36.725831   75402 cri.go:89] found id: ""
	I0816 18:16:36.725861   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.725868   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:36.725874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:36.725925   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:36.758398   75402 cri.go:89] found id: ""
	I0816 18:16:36.758433   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.758444   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:36.758453   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:36.758517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:36.791097   75402 cri.go:89] found id: ""
	I0816 18:16:36.791126   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.791136   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:36.791144   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:36.791207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:36.829337   75402 cri.go:89] found id: ""
	I0816 18:16:36.829369   75402 logs.go:276] 0 containers: []
	W0816 18:16:36.829380   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:36.829391   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:36.829405   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:36.881898   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:36.881932   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:36.895584   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:36.895618   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:36.967175   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:36.967197   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:36.967213   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:37.046993   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:37.047025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:34.440475   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.946369   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:34.206677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:36.207893   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:38.706193   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:35.449611   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:37.947527   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.588683   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:39.607205   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:39.607287   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:39.640517   75402 cri.go:89] found id: ""
	I0816 18:16:39.640541   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.640549   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:39.640554   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:39.640604   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:39.673777   75402 cri.go:89] found id: ""
	I0816 18:16:39.673805   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.673813   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:39.673818   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:39.673899   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:39.709574   75402 cri.go:89] found id: ""
	I0816 18:16:39.709598   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.709606   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:39.709611   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:39.709666   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:39.743946   75402 cri.go:89] found id: ""
	I0816 18:16:39.743971   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.743979   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:39.743985   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:39.744041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:39.776140   75402 cri.go:89] found id: ""
	I0816 18:16:39.776171   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.776181   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:39.776187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:39.776254   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:39.808697   75402 cri.go:89] found id: ""
	I0816 18:16:39.808719   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.808728   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:39.808734   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:39.808793   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:39.840163   75402 cri.go:89] found id: ""
	I0816 18:16:39.840190   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.840200   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:39.840206   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:39.840270   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:39.874396   75402 cri.go:89] found id: ""
	I0816 18:16:39.874419   75402 logs.go:276] 0 containers: []
	W0816 18:16:39.874426   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:39.874434   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:39.874448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:39.927922   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:39.927963   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:39.942048   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:39.942076   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:40.012143   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:40.012166   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:40.012181   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:40.088798   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:40.088844   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:42.625875   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:42.640386   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:42.640448   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:42.675201   75402 cri.go:89] found id: ""
	I0816 18:16:42.675224   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.675231   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:42.675236   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:42.675293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:42.705156   75402 cri.go:89] found id: ""
	I0816 18:16:42.705182   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.705192   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:42.705199   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:42.705258   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:42.738921   75402 cri.go:89] found id: ""
	I0816 18:16:42.738948   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.738956   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:42.738962   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:42.739013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:42.771130   75402 cri.go:89] found id: ""
	I0816 18:16:42.771160   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.771168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:42.771175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:42.771231   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:42.805774   75402 cri.go:89] found id: ""
	I0816 18:16:42.805803   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.805811   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:42.805817   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:42.805879   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:42.840248   75402 cri.go:89] found id: ""
	I0816 18:16:42.840277   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.840293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:42.840302   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:42.840360   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:42.873260   75402 cri.go:89] found id: ""
	I0816 18:16:42.873287   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.873297   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:42.873322   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:42.873383   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:42.906205   75402 cri.go:89] found id: ""
	I0816 18:16:42.906230   75402 logs.go:276] 0 containers: []
	W0816 18:16:42.906238   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:42.906247   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:42.906257   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:42.959235   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:42.959272   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:42.972063   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:42.972090   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:43.039530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:43.039558   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:43.039569   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:39.440219   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:41.441052   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:40.707059   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.210643   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:39.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:42.448534   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:43.115486   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:43.115519   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:45.651040   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:45.663718   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:45.663812   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:45.696548   75402 cri.go:89] found id: ""
	I0816 18:16:45.696578   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.696586   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:45.696591   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:45.696663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:45.731032   75402 cri.go:89] found id: ""
	I0816 18:16:45.731059   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.731068   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:45.731073   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:45.731126   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:45.764801   75402 cri.go:89] found id: ""
	I0816 18:16:45.764829   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.764840   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:45.764846   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:45.764908   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:45.800768   75402 cri.go:89] found id: ""
	I0816 18:16:45.800795   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.800803   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:45.800809   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:45.800858   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:45.841460   75402 cri.go:89] found id: ""
	I0816 18:16:45.841486   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.841493   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:45.841505   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:45.841566   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:45.875230   75402 cri.go:89] found id: ""
	I0816 18:16:45.875254   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.875261   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:45.875266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:45.875319   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:45.907711   75402 cri.go:89] found id: ""
	I0816 18:16:45.907739   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.907747   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:45.907753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:45.907804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:45.943147   75402 cri.go:89] found id: ""
	I0816 18:16:45.943171   75402 logs.go:276] 0 containers: []
	W0816 18:16:45.943182   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:45.943192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:45.943206   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:45.998459   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:45.998491   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:46.013237   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:46.013267   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:46.079248   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:46.079273   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:46.079288   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:46.158842   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:46.158874   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:43.939212   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.939893   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:47.940331   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:45.706588   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.206342   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:44.948046   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:46.948752   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:49.448263   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:48.696728   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:48.710946   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:48.711041   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:48.746696   75402 cri.go:89] found id: ""
	I0816 18:16:48.746727   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.746735   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:48.746741   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:48.746803   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:48.781496   75402 cri.go:89] found id: ""
	I0816 18:16:48.781522   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.781532   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:48.781539   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:48.781603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:48.815628   75402 cri.go:89] found id: ""
	I0816 18:16:48.815654   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.815665   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:48.815673   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:48.815736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:48.848990   75402 cri.go:89] found id: ""
	I0816 18:16:48.849018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.849030   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:48.849040   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:48.849098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:48.886924   75402 cri.go:89] found id: ""
	I0816 18:16:48.886949   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.886960   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:48.886968   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:48.887022   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:48.923989   75402 cri.go:89] found id: ""
	I0816 18:16:48.924018   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.924030   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:48.924038   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:48.924102   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:48.959513   75402 cri.go:89] found id: ""
	I0816 18:16:48.959546   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.959556   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:48.959562   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:48.959614   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:48.995615   75402 cri.go:89] found id: ""
	I0816 18:16:48.995651   75402 logs.go:276] 0 containers: []
	W0816 18:16:48.995662   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:48.995673   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:48.995688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:49.008440   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:49.008468   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:49.076761   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:49.076780   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:49.076797   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:49.152855   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:49.152893   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:49.190857   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:49.190887   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:51.745344   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:51.759552   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:51.759628   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:51.795494   75402 cri.go:89] found id: ""
	I0816 18:16:51.795520   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.795531   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:51.795539   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:51.795600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:51.833162   75402 cri.go:89] found id: ""
	I0816 18:16:51.833188   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.833198   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:51.833205   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:51.833265   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:51.866940   75402 cri.go:89] found id: ""
	I0816 18:16:51.866968   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.866979   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:51.866986   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:51.867051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:51.899824   75402 cri.go:89] found id: ""
	I0816 18:16:51.899857   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.899867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:51.899874   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:51.899937   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:51.932273   75402 cri.go:89] found id: ""
	I0816 18:16:51.932297   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.932312   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:51.932320   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:51.932390   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:51.966885   75402 cri.go:89] found id: ""
	I0816 18:16:51.966911   75402 logs.go:276] 0 containers: []
	W0816 18:16:51.966922   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:51.966930   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:51.966996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:52.002988   75402 cri.go:89] found id: ""
	I0816 18:16:52.003020   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.003029   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:52.003035   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:52.003098   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:52.038858   75402 cri.go:89] found id: ""
	I0816 18:16:52.038894   75402 logs.go:276] 0 containers: []
	W0816 18:16:52.038909   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:52.038919   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:52.038933   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:52.076404   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:52.076431   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:52.127735   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:52.127767   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:52.140657   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:52.140680   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:52.202961   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:52.202989   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:52.203008   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:50.440577   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.441865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:50.705618   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:52.706795   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:51.448948   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:53.947907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:54.787095   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:54.801258   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:54.801332   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:54.837987   75402 cri.go:89] found id: ""
	I0816 18:16:54.838018   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.838028   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:54.838034   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:54.838118   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:54.872439   75402 cri.go:89] found id: ""
	I0816 18:16:54.872466   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.872477   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:54.872490   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:54.872554   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:54.904676   75402 cri.go:89] found id: ""
	I0816 18:16:54.904706   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.904717   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:54.904724   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:54.904783   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:54.938101   75402 cri.go:89] found id: ""
	I0816 18:16:54.938134   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.938145   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:54.938154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:54.938218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:54.977409   75402 cri.go:89] found id: ""
	I0816 18:16:54.977442   75402 logs.go:276] 0 containers: []
	W0816 18:16:54.977453   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:54.977460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:54.977521   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:55.013248   75402 cri.go:89] found id: ""
	I0816 18:16:55.013275   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.013286   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:55.013294   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:55.013363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:55.044555   75402 cri.go:89] found id: ""
	I0816 18:16:55.044588   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.044597   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:55.044603   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:55.044690   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:55.075970   75402 cri.go:89] found id: ""
	I0816 18:16:55.075997   75402 logs.go:276] 0 containers: []
	W0816 18:16:55.076006   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:55.076014   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:55.076025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:55.149982   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:55.150017   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:16:55.190160   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:55.190194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:55.242629   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:55.242660   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:55.255229   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:55.255254   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:55.324775   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:57.824996   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:16:57.838666   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:16:57.838740   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:16:57.872828   75402 cri.go:89] found id: ""
	I0816 18:16:57.872861   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.872869   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:16:57.872875   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:16:57.872927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:16:57.907324   75402 cri.go:89] found id: ""
	I0816 18:16:57.907354   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.907366   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:16:57.907373   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:16:57.907433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:16:57.941657   75402 cri.go:89] found id: ""
	I0816 18:16:57.941682   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.941689   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:16:57.941695   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:16:57.941746   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:16:57.981424   75402 cri.go:89] found id: ""
	I0816 18:16:57.981466   75402 logs.go:276] 0 containers: []
	W0816 18:16:57.981480   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:16:57.981489   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:16:57.981562   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:16:58.015534   75402 cri.go:89] found id: ""
	I0816 18:16:58.015587   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.015598   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:16:58.015606   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:16:58.015669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:16:58.047875   75402 cri.go:89] found id: ""
	I0816 18:16:58.047908   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.047917   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:16:58.047923   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:16:58.047976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:16:58.079294   75402 cri.go:89] found id: ""
	I0816 18:16:58.079324   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.079334   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:16:58.079342   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:16:58.079406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:16:54.940977   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.439254   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.208298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.706380   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:55.948080   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:57.949589   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:58.112357   75402 cri.go:89] found id: ""
	I0816 18:16:58.112389   75402 logs.go:276] 0 containers: []
	W0816 18:16:58.112401   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:16:58.112413   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:16:58.112428   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:58.159903   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:16:58.159934   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:16:58.172763   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:16:58.172789   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:16:58.245827   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:16:58.245856   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:16:58.245872   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:16:58.325008   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:16:58.325049   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:00.864354   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:00.877517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:00.877593   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:00.915396   75402 cri.go:89] found id: ""
	I0816 18:17:00.915428   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.915438   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:00.915446   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:00.915611   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:00.953950   75402 cri.go:89] found id: ""
	I0816 18:17:00.953977   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.953987   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:00.953993   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:00.954051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:00.987673   75402 cri.go:89] found id: ""
	I0816 18:17:00.987703   75402 logs.go:276] 0 containers: []
	W0816 18:17:00.987713   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:00.987721   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:00.987784   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:01.021230   75402 cri.go:89] found id: ""
	I0816 18:17:01.021277   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.021308   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:01.021315   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:01.021388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:01.057087   75402 cri.go:89] found id: ""
	I0816 18:17:01.057117   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.057127   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:01.057135   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:01.057207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:01.094142   75402 cri.go:89] found id: ""
	I0816 18:17:01.094168   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.094176   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:01.094183   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:01.094233   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:01.132799   75402 cri.go:89] found id: ""
	I0816 18:17:01.132824   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.132831   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:01.132837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:01.132888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:01.173367   75402 cri.go:89] found id: ""
	I0816 18:17:01.173402   75402 logs.go:276] 0 containers: []
	W0816 18:17:01.173414   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:01.173425   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:01.173443   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:01.186856   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:01.186896   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:01.259913   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:01.259941   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:01.259955   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:01.340914   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:01.340947   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:01.381023   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:01.381058   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:16:59.440314   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.440377   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:16:59.706750   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:01.707186   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:00.448182   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:02.448773   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:03.933420   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:03.946940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:03.947008   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:03.984529   75402 cri.go:89] found id: ""
	I0816 18:17:03.984560   75402 logs.go:276] 0 containers: []
	W0816 18:17:03.984571   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:03.984581   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:03.984668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:04.017900   75402 cri.go:89] found id: ""
	I0816 18:17:04.017929   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.017940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:04.017948   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:04.018009   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:04.050837   75402 cri.go:89] found id: ""
	I0816 18:17:04.050871   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.050888   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:04.050896   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:04.050959   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:04.085448   75402 cri.go:89] found id: ""
	I0816 18:17:04.085477   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.085487   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:04.085495   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:04.085564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:04.118177   75402 cri.go:89] found id: ""
	I0816 18:17:04.118203   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.118213   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:04.118220   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:04.118284   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:04.150289   75402 cri.go:89] found id: ""
	I0816 18:17:04.150317   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.150330   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:04.150338   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:04.150404   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:04.184258   75402 cri.go:89] found id: ""
	I0816 18:17:04.184282   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.184290   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:04.184295   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:04.184347   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:04.217142   75402 cri.go:89] found id: ""
	I0816 18:17:04.217174   75402 logs.go:276] 0 containers: []
	W0816 18:17:04.217184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:04.217192   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:04.217204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:04.253000   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:04.253034   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:04.304978   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:04.305018   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:04.320210   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:04.320241   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:04.396146   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:04.396169   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:04.396184   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:06.980747   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:06.992944   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:06.993006   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:07.026303   75402 cri.go:89] found id: ""
	I0816 18:17:07.026356   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.026368   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:07.026376   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:07.026443   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:07.059226   75402 cri.go:89] found id: ""
	I0816 18:17:07.059257   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.059268   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:07.059277   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:07.059339   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:07.092142   75402 cri.go:89] found id: ""
	I0816 18:17:07.092171   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.092182   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:07.092188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:07.092248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:07.125284   75402 cri.go:89] found id: ""
	I0816 18:17:07.125330   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.125347   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:07.125355   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:07.125420   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:07.163890   75402 cri.go:89] found id: ""
	I0816 18:17:07.163919   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.163930   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:07.163938   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:07.164002   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:07.197988   75402 cri.go:89] found id: ""
	I0816 18:17:07.198014   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.198025   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:07.198033   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:07.198116   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:07.232709   75402 cri.go:89] found id: ""
	I0816 18:17:07.232738   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.232749   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:07.232756   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:07.232817   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:07.264514   75402 cri.go:89] found id: ""
	I0816 18:17:07.264548   75402 logs.go:276] 0 containers: []
	W0816 18:17:07.264558   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:07.264569   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:07.264583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:07.316138   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:07.316173   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:07.329659   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:07.329688   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:07.397345   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:07.397380   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:07.397397   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:07.481245   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:07.481280   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:03.940100   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:05.940355   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.940821   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.207253   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:06.705745   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:08.706828   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:04.949027   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:07.447957   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.024405   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:10.036860   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:10.036927   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.069402   75402 cri.go:89] found id: ""
	I0816 18:17:10.069436   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.069448   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:10.069458   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:10.069511   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:10.101480   75402 cri.go:89] found id: ""
	I0816 18:17:10.101508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.101518   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:10.101529   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:10.101601   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:10.131673   75402 cri.go:89] found id: ""
	I0816 18:17:10.131708   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.131719   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:10.131726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:10.131821   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:10.166476   75402 cri.go:89] found id: ""
	I0816 18:17:10.166508   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.166518   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:10.166525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:10.166590   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:10.199296   75402 cri.go:89] found id: ""
	I0816 18:17:10.199321   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.199332   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:10.199340   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:10.199406   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:10.232640   75402 cri.go:89] found id: ""
	I0816 18:17:10.232672   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.232683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:10.232691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:10.232775   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:10.263958   75402 cri.go:89] found id: ""
	I0816 18:17:10.263988   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.263998   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:10.264003   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:10.264052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:10.295904   75402 cri.go:89] found id: ""
	I0816 18:17:10.295929   75402 logs.go:276] 0 containers: []
	W0816 18:17:10.295937   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:10.295946   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:10.295957   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:10.344874   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:10.344909   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:10.358523   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:10.358552   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:10.433311   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:10.433334   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:10.433351   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:10.514580   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:10.514620   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.053815   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:13.068517   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:13.068597   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:10.440472   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:12.939209   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:10.707438   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.207630   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:09.947889   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:11.949408   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:14.447906   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:13.104251   75402 cri.go:89] found id: ""
	I0816 18:17:13.104279   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.104313   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:13.104321   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:13.104375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:13.137415   75402 cri.go:89] found id: ""
	I0816 18:17:13.137442   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.137453   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:13.137461   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:13.137510   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:13.174165   75402 cri.go:89] found id: ""
	I0816 18:17:13.174191   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.174203   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:13.174210   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:13.174271   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:13.206789   75402 cri.go:89] found id: ""
	I0816 18:17:13.206814   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.206823   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:13.206831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:13.206892   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:13.238950   75402 cri.go:89] found id: ""
	I0816 18:17:13.238975   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.238984   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:13.238990   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:13.239037   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:13.271485   75402 cri.go:89] found id: ""
	I0816 18:17:13.271518   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.271535   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:13.271544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:13.271612   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:13.307576   75402 cri.go:89] found id: ""
	I0816 18:17:13.307610   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.307622   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:13.307632   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:13.307698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:13.339746   75402 cri.go:89] found id: ""
	I0816 18:17:13.339792   75402 logs.go:276] 0 containers: []
	W0816 18:17:13.339802   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:13.339813   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:13.339827   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:13.352847   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:13.352875   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:13.440397   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:13.440418   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:13.440432   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:13.514879   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:13.514916   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:13.553848   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:13.553882   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.103318   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:16.115837   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:16.115922   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:16.147079   75402 cri.go:89] found id: ""
	I0816 18:17:16.147108   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.147119   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:16.147127   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:16.147189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:16.184207   75402 cri.go:89] found id: ""
	I0816 18:17:16.184233   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.184241   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:16.184247   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:16.184295   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:16.219036   75402 cri.go:89] found id: ""
	I0816 18:17:16.219065   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.219072   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:16.219078   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:16.219163   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:16.251269   75402 cri.go:89] found id: ""
	I0816 18:17:16.251307   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.251320   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:16.251329   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:16.251394   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:16.286549   75402 cri.go:89] found id: ""
	I0816 18:17:16.286576   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.286585   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:16.286591   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:16.286647   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:16.322017   75402 cri.go:89] found id: ""
	I0816 18:17:16.322045   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.322055   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:16.322063   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:16.322128   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:16.353606   75402 cri.go:89] found id: ""
	I0816 18:17:16.353636   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.353646   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:16.353653   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:16.353719   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:16.386973   75402 cri.go:89] found id: ""
	I0816 18:17:16.387005   75402 logs.go:276] 0 containers: []
	W0816 18:17:16.387016   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:16.387027   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:16.387039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:16.437031   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:16.437066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:16.451258   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:16.451292   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:16.519130   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:16.519155   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:16.519170   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:16.598591   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:16.598626   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:14.939993   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.440655   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:15.705969   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:17.706271   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:16.449266   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:18.948220   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:19.147916   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:19.160525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:19.160600   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:19.193494   75402 cri.go:89] found id: ""
	I0816 18:17:19.193520   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.193527   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:19.193533   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:19.193599   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:19.230936   75402 cri.go:89] found id: ""
	I0816 18:17:19.230963   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.230971   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:19.230976   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:19.231029   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:19.263713   75402 cri.go:89] found id: ""
	I0816 18:17:19.263735   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.263742   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:19.263748   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:19.263794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:19.294609   75402 cri.go:89] found id: ""
	I0816 18:17:19.294635   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.294642   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:19.294647   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:19.294698   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:19.329278   75402 cri.go:89] found id: ""
	I0816 18:17:19.329303   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.329313   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:19.329319   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:19.329368   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:19.362007   75402 cri.go:89] found id: ""
	I0816 18:17:19.362043   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.362052   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:19.362067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:19.362120   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:19.395190   75402 cri.go:89] found id: ""
	I0816 18:17:19.395217   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.395248   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:19.395255   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:19.395302   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:19.426962   75402 cri.go:89] found id: ""
	I0816 18:17:19.426991   75402 logs.go:276] 0 containers: []
	W0816 18:17:19.427002   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:19.427012   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:19.427027   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.441319   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:19.441346   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:19.511390   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:19.511409   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:19.511425   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:19.590897   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:19.590935   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:19.628753   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:19.628781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.182534   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:22.194844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:22.194917   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:22.228225   75402 cri.go:89] found id: ""
	I0816 18:17:22.228247   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.228269   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:22.228276   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:22.228325   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:22.258614   75402 cri.go:89] found id: ""
	I0816 18:17:22.258646   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.258654   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:22.258660   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:22.258708   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:22.289103   75402 cri.go:89] found id: ""
	I0816 18:17:22.289136   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.289147   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:22.289154   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:22.289215   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:22.321828   75402 cri.go:89] found id: ""
	I0816 18:17:22.321857   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.321869   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:22.321877   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:22.321942   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:22.353557   75402 cri.go:89] found id: ""
	I0816 18:17:22.353588   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.353597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:22.353602   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:22.353660   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:22.385078   75402 cri.go:89] found id: ""
	I0816 18:17:22.385103   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.385110   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:22.385116   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:22.385189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:22.415864   75402 cri.go:89] found id: ""
	I0816 18:17:22.415900   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.415913   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:22.415922   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:22.415990   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:22.449895   75402 cri.go:89] found id: ""
	I0816 18:17:22.449922   75402 logs.go:276] 0 containers: []
	W0816 18:17:22.449942   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:22.449957   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:22.449974   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:22.523055   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:22.523073   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:22.523084   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:22.599680   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:22.599719   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:22.638021   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:22.638057   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:22.688970   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:22.689010   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:19.941154   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.440580   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:20.207713   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:22.706805   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:21.448399   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:23.448444   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.202748   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:25.217316   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:25.217388   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:25.249528   75402 cri.go:89] found id: ""
	I0816 18:17:25.249558   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.249566   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:25.249578   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:25.249625   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:25.282667   75402 cri.go:89] found id: ""
	I0816 18:17:25.282696   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.282706   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:25.282712   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:25.282764   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:25.314061   75402 cri.go:89] found id: ""
	I0816 18:17:25.314091   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.314101   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:25.314108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:25.314161   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:25.351260   75402 cri.go:89] found id: ""
	I0816 18:17:25.351287   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.351296   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:25.351301   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:25.351352   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:25.388303   75402 cri.go:89] found id: ""
	I0816 18:17:25.388334   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.388345   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:25.388352   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:25.388412   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:25.422133   75402 cri.go:89] found id: ""
	I0816 18:17:25.422161   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.422169   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:25.422175   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:25.422232   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:25.456749   75402 cri.go:89] found id: ""
	I0816 18:17:25.456775   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.456783   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:25.456789   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:25.456836   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:25.494783   75402 cri.go:89] found id: ""
	I0816 18:17:25.494809   75402 logs.go:276] 0 containers: []
	W0816 18:17:25.494817   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:25.494825   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:25.494836   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:25.561253   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:25.561290   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:25.580349   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:25.580383   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:25.656333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:25.656361   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:25.656378   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:25.733479   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:25.733515   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:24.444069   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.939743   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:24.707849   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:26.709711   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:25.448555   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:27.449070   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:28.272217   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:28.285750   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:28.285822   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:28.318230   75402 cri.go:89] found id: ""
	I0816 18:17:28.318260   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.318268   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:28.318275   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:28.318344   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:28.351766   75402 cri.go:89] found id: ""
	I0816 18:17:28.351798   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.351808   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:28.351814   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:28.351872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:28.385543   75402 cri.go:89] found id: ""
	I0816 18:17:28.385572   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.385581   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:28.385588   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:28.385653   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:28.418808   75402 cri.go:89] found id: ""
	I0816 18:17:28.418837   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.418846   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:28.418852   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:28.418900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:28.453883   75402 cri.go:89] found id: ""
	I0816 18:17:28.453911   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.453922   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:28.453929   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:28.453996   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:28.486261   75402 cri.go:89] found id: ""
	I0816 18:17:28.486291   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.486304   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:28.486310   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:28.486366   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:28.520617   75402 cri.go:89] found id: ""
	I0816 18:17:28.520658   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.520670   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:28.520678   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:28.520731   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:28.552996   75402 cri.go:89] found id: ""
	I0816 18:17:28.553026   75402 logs.go:276] 0 containers: []
	W0816 18:17:28.553036   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:28.553046   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:28.553061   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:28.604149   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:28.604192   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:28.617393   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:28.617421   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:28.683258   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:28.683279   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:28.683294   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.766933   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:28.766977   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.305897   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:31.326070   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:31.326143   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:31.375314   75402 cri.go:89] found id: ""
	I0816 18:17:31.375350   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.375361   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:31.375369   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:31.375429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:31.407372   75402 cri.go:89] found id: ""
	I0816 18:17:31.407398   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.407406   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:31.407411   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:31.407459   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:31.445679   75402 cri.go:89] found id: ""
	I0816 18:17:31.445706   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.445714   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:31.445720   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:31.445781   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:31.480040   75402 cri.go:89] found id: ""
	I0816 18:17:31.480072   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.480080   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:31.480085   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:31.480145   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:31.511143   75402 cri.go:89] found id: ""
	I0816 18:17:31.511171   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.511182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:31.511188   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:31.511252   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:31.544254   75402 cri.go:89] found id: ""
	I0816 18:17:31.544282   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.544293   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:31.544300   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:31.544363   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:31.579007   75402 cri.go:89] found id: ""
	I0816 18:17:31.579033   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.579041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:31.579046   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:31.579108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:31.619966   75402 cri.go:89] found id: ""
	I0816 18:17:31.619995   75402 logs.go:276] 0 containers: []
	W0816 18:17:31.620005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:31.620018   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:31.620035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:31.657784   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:31.657815   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:31.706824   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:31.706853   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:31.719696   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:31.719721   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:31.786096   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:31.786124   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:31.786142   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:28.940711   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.440514   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.206929   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:31.706188   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:33.706244   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:29.948053   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:32.448453   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.363862   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:34.377365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:34.377430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:34.414191   75402 cri.go:89] found id: ""
	I0816 18:17:34.414216   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.414223   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:34.414229   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:34.414285   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:34.446811   75402 cri.go:89] found id: ""
	I0816 18:17:34.446836   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.446843   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:34.446848   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:34.446905   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:34.477582   75402 cri.go:89] found id: ""
	I0816 18:17:34.477615   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.477627   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:34.477634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:34.477695   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:34.507868   75402 cri.go:89] found id: ""
	I0816 18:17:34.507901   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.507912   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:34.507921   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:34.507984   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:34.538719   75402 cri.go:89] found id: ""
	I0816 18:17:34.538754   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.538765   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:34.538772   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:34.538826   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:34.571445   75402 cri.go:89] found id: ""
	I0816 18:17:34.571468   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.571477   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:34.571484   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:34.571557   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:34.601587   75402 cri.go:89] found id: ""
	I0816 18:17:34.601611   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.601618   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:34.601624   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:34.601669   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:34.634850   75402 cri.go:89] found id: ""
	I0816 18:17:34.634878   75402 logs.go:276] 0 containers: []
	W0816 18:17:34.634892   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:34.634906   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:34.634920   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:34.682828   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:34.682859   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:34.695796   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:34.695820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:34.762100   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:34.762121   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:34.762133   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:34.845329   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:34.845359   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:37.386266   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:37.398940   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:37.399005   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:37.433072   75402 cri.go:89] found id: ""
	I0816 18:17:37.433099   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.433112   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:37.433118   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:37.433169   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:37.466968   75402 cri.go:89] found id: ""
	I0816 18:17:37.467001   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.467012   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:37.467021   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:37.467086   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:37.509268   75402 cri.go:89] found id: ""
	I0816 18:17:37.509291   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.509300   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:37.509306   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:37.509365   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:37.541295   75402 cri.go:89] found id: ""
	I0816 18:17:37.541338   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.541350   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:37.541357   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:37.541421   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:37.575423   75402 cri.go:89] found id: ""
	I0816 18:17:37.575453   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.575464   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:37.575472   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:37.575540   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:37.614787   75402 cri.go:89] found id: ""
	I0816 18:17:37.614817   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.614828   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:37.614835   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:37.614896   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:37.646396   75402 cri.go:89] found id: ""
	I0816 18:17:37.646430   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.646441   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:37.646449   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:37.646517   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:37.679383   75402 cri.go:89] found id: ""
	I0816 18:17:37.679414   75402 logs.go:276] 0 containers: []
	W0816 18:17:37.679423   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:37.679431   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:37.679442   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:37.729641   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:37.729673   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:37.742420   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:37.742448   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:37.812572   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:37.812600   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:37.812615   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:37.887100   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:37.887137   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:33.940380   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.941055   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.440700   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:35.706903   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:38.207115   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:34.947638   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:37.448511   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:39.448944   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.424202   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:40.438231   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:40.438337   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:40.474614   75402 cri.go:89] found id: ""
	I0816 18:17:40.474639   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.474648   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:40.474653   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:40.474701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:40.510123   75402 cri.go:89] found id: ""
	I0816 18:17:40.510154   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.510162   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:40.510167   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:40.510217   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:40.548971   75402 cri.go:89] found id: ""
	I0816 18:17:40.549000   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.549008   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:40.549013   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:40.549069   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:40.595126   75402 cri.go:89] found id: ""
	I0816 18:17:40.595158   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.595167   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:40.595174   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:40.595220   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:40.629769   75402 cri.go:89] found id: ""
	I0816 18:17:40.629793   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.629801   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:40.629807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:40.629871   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:40.661889   75402 cri.go:89] found id: ""
	I0816 18:17:40.661922   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.661932   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:40.661939   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:40.662001   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:40.697764   75402 cri.go:89] found id: ""
	I0816 18:17:40.697790   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.697801   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:40.697808   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:40.697867   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:40.734825   75402 cri.go:89] found id: ""
	I0816 18:17:40.734852   75402 logs.go:276] 0 containers: []
	W0816 18:17:40.734862   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:40.734872   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:40.734939   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:40.787975   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:40.788015   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:40.800817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:40.800843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:40.874182   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:40.874205   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:40.874219   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:40.960032   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:40.960066   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:40.940284   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.943218   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:40.207943   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:42.707356   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:41.947437   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.947887   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:43.499770   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:43.513726   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:43.513806   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:43.548368   75402 cri.go:89] found id: ""
	I0816 18:17:43.548396   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.548406   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:43.548413   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:43.548474   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:43.581177   75402 cri.go:89] found id: ""
	I0816 18:17:43.581205   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.581216   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:43.581223   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:43.581291   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:43.614315   75402 cri.go:89] found id: ""
	I0816 18:17:43.614354   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.614367   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:43.614374   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:43.614437   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:43.648608   75402 cri.go:89] found id: ""
	I0816 18:17:43.648645   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.648658   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:43.648669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:43.648722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:43.680549   75402 cri.go:89] found id: ""
	I0816 18:17:43.680586   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.680597   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:43.680604   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:43.680686   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:43.710473   75402 cri.go:89] found id: ""
	I0816 18:17:43.710497   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.710506   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:43.710514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:43.710576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:43.741415   75402 cri.go:89] found id: ""
	I0816 18:17:43.741442   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.741450   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:43.741456   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:43.741505   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:43.775018   75402 cri.go:89] found id: ""
	I0816 18:17:43.775051   75402 logs.go:276] 0 containers: []
	W0816 18:17:43.775063   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:43.775074   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:43.775087   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:43.825596   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:43.825630   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:43.839133   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:43.839161   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:43.905645   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:43.905667   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:43.905679   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:43.988860   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:43.988901   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.525896   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:46.539147   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:46.539229   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:46.570703   75402 cri.go:89] found id: ""
	I0816 18:17:46.570726   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.570734   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:46.570740   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:46.570785   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:46.605909   75402 cri.go:89] found id: ""
	I0816 18:17:46.605939   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.605954   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:46.605961   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:46.606013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:46.638865   75402 cri.go:89] found id: ""
	I0816 18:17:46.638899   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.638911   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:46.638919   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:46.638994   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:46.671869   75402 cri.go:89] found id: ""
	I0816 18:17:46.671904   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.671917   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:46.671926   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:46.671988   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:46.703423   75402 cri.go:89] found id: ""
	I0816 18:17:46.703464   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.703473   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:46.703479   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:46.703545   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:46.735824   75402 cri.go:89] found id: ""
	I0816 18:17:46.735853   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.735864   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:46.735871   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:46.735926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:46.767122   75402 cri.go:89] found id: ""
	I0816 18:17:46.767146   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.767154   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:46.767160   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:46.767207   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:46.798093   75402 cri.go:89] found id: ""
	I0816 18:17:46.798126   75402 logs.go:276] 0 containers: []
	W0816 18:17:46.798140   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:46.798152   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:46.798167   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:46.832699   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:46.832725   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:46.884212   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:46.884246   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:46.896896   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:46.896921   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:46.968805   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:46.968824   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:46.968838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:45.440474   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.940127   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.206534   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:47.206973   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:45.948252   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:48.448086   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.552581   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:49.565134   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:49.565212   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:49.597012   75402 cri.go:89] found id: ""
	I0816 18:17:49.597042   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.597057   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:49.597067   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:49.597133   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:49.628902   75402 cri.go:89] found id: ""
	I0816 18:17:49.628935   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.628948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:49.628957   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:49.629025   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:49.662668   75402 cri.go:89] found id: ""
	I0816 18:17:49.662698   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.662709   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:49.662715   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:49.662778   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:49.696354   75402 cri.go:89] found id: ""
	I0816 18:17:49.696381   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.696389   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:49.696395   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:49.696487   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:49.730801   75402 cri.go:89] found id: ""
	I0816 18:17:49.730838   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.730849   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:49.730856   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:49.730921   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:49.764474   75402 cri.go:89] found id: ""
	I0816 18:17:49.764503   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.764514   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:49.764522   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:49.764585   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:49.798577   75402 cri.go:89] found id: ""
	I0816 18:17:49.798616   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.798627   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:49.798634   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:49.798703   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:49.830987   75402 cri.go:89] found id: ""
	I0816 18:17:49.831016   75402 logs.go:276] 0 containers: []
	W0816 18:17:49.831024   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:49.831032   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:49.831043   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:49.883397   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:49.883433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:49.897208   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:49.897239   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:49.968363   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:49.968386   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:49.968398   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:50.056552   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:50.056583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:52.596191   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:52.609592   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:52.609668   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:52.645775   75402 cri.go:89] found id: ""
	I0816 18:17:52.645807   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.645817   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:52.645823   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:52.645869   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:52.677817   75402 cri.go:89] found id: ""
	I0816 18:17:52.677852   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.677862   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:52.677870   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:52.677935   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:52.710618   75402 cri.go:89] found id: ""
	I0816 18:17:52.710648   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.710658   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:52.710664   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:52.710716   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:52.745830   75402 cri.go:89] found id: ""
	I0816 18:17:52.745858   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.745867   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:52.745872   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:52.745929   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:52.778511   75402 cri.go:89] found id: ""
	I0816 18:17:52.778538   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.778548   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:52.778567   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:52.778632   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:52.810759   75402 cri.go:89] found id: ""
	I0816 18:17:52.810788   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.810800   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:52.810807   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:52.810872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:52.843786   75402 cri.go:89] found id: ""
	I0816 18:17:52.843814   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.843824   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:52.843831   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:52.843886   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:52.876886   75402 cri.go:89] found id: ""
	I0816 18:17:52.876914   75402 logs.go:276] 0 containers: []
	W0816 18:17:52.876924   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:52.876934   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:52.876950   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:52.932519   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:52.932559   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:52.946645   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:52.946671   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:53.018156   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:53.018177   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:53.018190   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:53.095562   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:53.095600   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:49.940263   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:51.940433   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:49.707635   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.206027   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:50.449204   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:52.949591   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.633820   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:55.646170   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:55.646238   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:55.678147   75402 cri.go:89] found id: ""
	I0816 18:17:55.678181   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.678194   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:55.678202   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:55.678264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:55.710910   75402 cri.go:89] found id: ""
	I0816 18:17:55.710938   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.710948   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:55.710956   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:55.711012   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:55.744822   75402 cri.go:89] found id: ""
	I0816 18:17:55.744853   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.744863   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:55.744870   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:55.744931   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:55.791677   75402 cri.go:89] found id: ""
	I0816 18:17:55.791708   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.791719   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:55.791727   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:55.791788   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:55.826448   75402 cri.go:89] found id: ""
	I0816 18:17:55.826481   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.826492   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:55.826500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:55.826564   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:55.861178   75402 cri.go:89] found id: ""
	I0816 18:17:55.861210   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.861219   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:55.861225   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:55.861280   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:55.898073   75402 cri.go:89] found id: ""
	I0816 18:17:55.898099   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.898110   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:55.898117   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:55.898184   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:55.931446   75402 cri.go:89] found id: ""
	I0816 18:17:55.931478   75402 logs.go:276] 0 containers: []
	W0816 18:17:55.931487   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:55.931498   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:55.931514   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:55.999910   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:55.999931   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:55.999943   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:56.077240   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:56.077312   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:56.115479   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:56.115506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:56.166954   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:56.166989   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:17:54.440166   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.939865   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:54.206368   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:56.206710   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.207053   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:55.448566   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:57.948891   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:17:58.680571   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:17:58.692824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:17:58.692890   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:17:58.729761   75402 cri.go:89] found id: ""
	I0816 18:17:58.729786   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.729794   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:17:58.729799   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:17:58.729857   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:17:58.764943   75402 cri.go:89] found id: ""
	I0816 18:17:58.765082   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.765113   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:17:58.765124   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:17:58.765179   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:17:58.801314   75402 cri.go:89] found id: ""
	I0816 18:17:58.801345   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.801357   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:17:58.801365   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:17:58.801429   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:17:58.833936   75402 cri.go:89] found id: ""
	I0816 18:17:58.833973   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.833982   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:17:58.833988   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:17:58.834046   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:17:58.870108   75402 cri.go:89] found id: ""
	I0816 18:17:58.870137   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.870148   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:17:58.870155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:17:58.870219   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:17:58.904157   75402 cri.go:89] found id: ""
	I0816 18:17:58.904184   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.904194   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:17:58.904201   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:17:58.904264   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:17:58.937862   75402 cri.go:89] found id: ""
	I0816 18:17:58.937891   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.937901   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:17:58.937909   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:17:58.937972   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:17:58.972465   75402 cri.go:89] found id: ""
	I0816 18:17:58.972495   75402 logs.go:276] 0 containers: []
	W0816 18:17:58.972506   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:17:58.972517   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:17:58.972532   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:17:59.047197   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:17:59.047223   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:17:59.047238   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:17:59.126634   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:17:59.126668   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:59.165528   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:17:59.165562   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:17:59.214294   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:17:59.214433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:01.729662   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:01.742582   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:01.742642   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:01.776148   75402 cri.go:89] found id: ""
	I0816 18:18:01.776180   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.776188   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:01.776197   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:01.776243   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:01.809186   75402 cri.go:89] found id: ""
	I0816 18:18:01.809218   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.809229   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:01.809237   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:01.809307   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:01.842379   75402 cri.go:89] found id: ""
	I0816 18:18:01.842406   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.842417   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:01.842425   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:01.842490   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:01.874706   75402 cri.go:89] found id: ""
	I0816 18:18:01.874739   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.874747   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:01.874753   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:01.874813   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:01.915567   75402 cri.go:89] found id: ""
	I0816 18:18:01.915596   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.915607   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:01.915615   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:01.915675   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:01.951527   75402 cri.go:89] found id: ""
	I0816 18:18:01.951559   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.951569   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:01.951576   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:01.951638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:01.983822   75402 cri.go:89] found id: ""
	I0816 18:18:01.983848   75402 logs.go:276] 0 containers: []
	W0816 18:18:01.983856   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:01.983861   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:01.983909   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:02.018976   75402 cri.go:89] found id: ""
	I0816 18:18:02.019003   75402 logs.go:276] 0 containers: []
	W0816 18:18:02.019012   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:02.019019   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:02.019033   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:02.071096   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:02.071131   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:02.085163   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:02.085189   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:02.154771   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:02.154789   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:02.154800   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:02.242068   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:02.242105   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:17:58.941456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:01.440404   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.208085   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.705334   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:00.447843   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:02.448334   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.790311   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:04.803215   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:04.803298   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:04.835834   75402 cri.go:89] found id: ""
	I0816 18:18:04.835868   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.835879   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:04.835886   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:04.835951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:04.870000   75402 cri.go:89] found id: ""
	I0816 18:18:04.870032   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.870042   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:04.870049   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:04.870111   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:04.906624   75402 cri.go:89] found id: ""
	I0816 18:18:04.906653   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.906663   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:04.906670   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:04.906730   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:04.940115   75402 cri.go:89] found id: ""
	I0816 18:18:04.940139   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.940148   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:04.940155   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:04.940213   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:04.974461   75402 cri.go:89] found id: ""
	I0816 18:18:04.974493   75402 logs.go:276] 0 containers: []
	W0816 18:18:04.974503   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:04.974510   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:04.974571   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:05.006593   75402 cri.go:89] found id: ""
	I0816 18:18:05.006618   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.006628   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:05.006635   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:05.006691   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:05.040041   75402 cri.go:89] found id: ""
	I0816 18:18:05.040066   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.040082   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:05.040089   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:05.040144   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:05.072968   75402 cri.go:89] found id: ""
	I0816 18:18:05.072996   75402 logs.go:276] 0 containers: []
	W0816 18:18:05.073005   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:05.073014   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:05.073025   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:05.124510   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:05.124543   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:05.145566   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:05.145592   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:05.221874   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:05.221898   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:05.221914   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:05.297283   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:05.297316   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:07.837564   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:07.850372   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:07.850441   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:07.882879   75402 cri.go:89] found id: ""
	I0816 18:18:07.882906   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.882915   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:07.882920   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:07.882978   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:07.916983   75402 cri.go:89] found id: ""
	I0816 18:18:07.917011   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.917019   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:07.917024   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:07.917075   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:07.953864   75402 cri.go:89] found id: ""
	I0816 18:18:07.953886   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.953896   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:07.953903   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:07.953951   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:07.994375   75402 cri.go:89] found id: ""
	I0816 18:18:07.994399   75402 logs.go:276] 0 containers: []
	W0816 18:18:07.994408   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:07.994414   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:07.994472   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:08.029137   75402 cri.go:89] found id: ""
	I0816 18:18:08.029170   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.029182   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:08.029189   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:08.029253   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:08.062331   75402 cri.go:89] found id: ""
	I0816 18:18:08.062358   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.062367   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:08.062373   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:08.062430   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:08.097021   75402 cri.go:89] found id: ""
	I0816 18:18:08.097044   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.097051   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:08.097056   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:08.097112   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:03.940724   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.441847   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.706298   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.707011   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:04.948066   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:06.948125   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.948992   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:08.131147   75402 cri.go:89] found id: ""
	I0816 18:18:08.131174   75402 logs.go:276] 0 containers: []
	W0816 18:18:08.131184   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:08.131192   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:08.131203   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.182334   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:08.182373   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:08.195459   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:08.195485   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:08.260333   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:08.260351   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:08.260363   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:08.344466   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:08.344506   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:10.881640   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:10.896400   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:10.896482   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:10.934034   75402 cri.go:89] found id: ""
	I0816 18:18:10.934068   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.934076   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:10.934081   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:10.934130   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:10.966697   75402 cri.go:89] found id: ""
	I0816 18:18:10.966724   75402 logs.go:276] 0 containers: []
	W0816 18:18:10.966733   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:10.966741   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:10.966807   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:11.000540   75402 cri.go:89] found id: ""
	I0816 18:18:11.000568   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.000579   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:11.000587   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:11.000665   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:11.034322   75402 cri.go:89] found id: ""
	I0816 18:18:11.034346   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.034354   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:11.034360   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:11.034407   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:11.067081   75402 cri.go:89] found id: ""
	I0816 18:18:11.067108   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.067116   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:11.067122   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:11.067170   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:11.099726   75402 cri.go:89] found id: ""
	I0816 18:18:11.099753   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.099763   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:11.099770   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:11.099834   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:11.133187   75402 cri.go:89] found id: ""
	I0816 18:18:11.133216   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.133226   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:11.133235   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:11.133315   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:11.167121   75402 cri.go:89] found id: ""
	I0816 18:18:11.167157   75402 logs.go:276] 0 containers: []
	W0816 18:18:11.167166   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:11.167177   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:11.167194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:11.181396   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:11.181424   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:11.248286   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:11.248313   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:11.248325   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:11.328546   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:11.328583   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:11.365534   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:11.365576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:08.939686   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.941097   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.440001   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:09.207018   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:11.207677   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.706818   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:10.949461   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.448057   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:13.919889   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:13.935097   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:13.935178   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:13.973196   75402 cri.go:89] found id: ""
	I0816 18:18:13.973225   75402 logs.go:276] 0 containers: []
	W0816 18:18:13.973236   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:13.973244   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:13.973328   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:14.011913   75402 cri.go:89] found id: ""
	I0816 18:18:14.011936   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.011944   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:14.011950   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:14.012013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:14.048418   75402 cri.go:89] found id: ""
	I0816 18:18:14.048447   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.048459   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:14.048466   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:14.048515   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:14.082462   75402 cri.go:89] found id: ""
	I0816 18:18:14.082496   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.082506   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:14.082514   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:14.082576   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:14.114958   75402 cri.go:89] found id: ""
	I0816 18:18:14.114986   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.114996   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:14.115005   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:14.115067   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:14.154829   75402 cri.go:89] found id: ""
	I0816 18:18:14.154865   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.154878   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:14.154888   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:14.154957   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:14.190012   75402 cri.go:89] found id: ""
	I0816 18:18:14.190045   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.190053   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:14.190058   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:14.190108   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:14.223314   75402 cri.go:89] found id: ""
	I0816 18:18:14.223341   75402 logs.go:276] 0 containers: []
	W0816 18:18:14.223350   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:14.223360   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:14.223381   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:14.274995   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:14.275035   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:14.288518   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:14.288564   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:14.365668   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:14.365691   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:14.365705   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:14.445828   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:14.445866   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:16.981802   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:16.994729   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:16.994794   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:17.029790   75402 cri.go:89] found id: ""
	I0816 18:18:17.029821   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.029839   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:17.029848   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:17.029912   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:17.063194   75402 cri.go:89] found id: ""
	I0816 18:18:17.063223   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.063233   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:17.063240   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:17.063293   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:17.097808   75402 cri.go:89] found id: ""
	I0816 18:18:17.097831   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.097839   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:17.097844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:17.097900   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:17.132646   75402 cri.go:89] found id: ""
	I0816 18:18:17.132682   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.132691   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:17.132697   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:17.132751   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:17.164285   75402 cri.go:89] found id: ""
	I0816 18:18:17.164316   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.164328   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:17.164335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:17.164391   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:17.195642   75402 cri.go:89] found id: ""
	I0816 18:18:17.195672   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.195683   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:17.195691   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:17.195754   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:17.228005   75402 cri.go:89] found id: ""
	I0816 18:18:17.228033   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.228041   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:17.228047   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:17.228107   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:17.279195   75402 cri.go:89] found id: ""
	I0816 18:18:17.279229   75402 logs.go:276] 0 containers: []
	W0816 18:18:17.279241   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:17.279253   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:17.279270   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:17.360084   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:17.360125   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:17.405184   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:17.405210   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:17.457453   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:17.457483   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:17.471472   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:17.471502   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:17.536478   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:15.939660   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.940456   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:16.207019   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:18.706191   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:15.450419   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:17.948912   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.036644   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:20.050169   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:20.050244   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.087943   75402 cri.go:89] found id: ""
	I0816 18:18:20.087971   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.087981   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:20.087988   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:20.088051   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:20.119908   75402 cri.go:89] found id: ""
	I0816 18:18:20.119931   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.119940   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:20.119945   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:20.120013   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:20.152115   75402 cri.go:89] found id: ""
	I0816 18:18:20.152146   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.152156   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:20.152162   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:20.152209   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:20.189464   75402 cri.go:89] found id: ""
	I0816 18:18:20.189488   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.189495   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:20.189500   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:20.189550   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:20.224779   75402 cri.go:89] found id: ""
	I0816 18:18:20.224807   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.224817   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:20.224824   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:20.224888   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:20.257021   75402 cri.go:89] found id: ""
	I0816 18:18:20.257048   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.257059   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:20.257067   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:20.257121   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:20.290991   75402 cri.go:89] found id: ""
	I0816 18:18:20.291023   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.291032   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:20.291039   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:20.291099   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:20.323674   75402 cri.go:89] found id: ""
	I0816 18:18:20.323704   75402 logs.go:276] 0 containers: []
	W0816 18:18:20.323715   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:20.323726   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:20.323742   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:20.373411   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:20.373447   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:20.386954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:20.386981   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:20.464366   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:20.464384   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:20.464403   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:20.541836   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:20.541881   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:23.085071   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:23.100460   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:23.100524   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:20.440656   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.942713   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.706771   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.207824   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:20.448676   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:22.948907   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:23.141239   75402 cri.go:89] found id: ""
	I0816 18:18:23.141269   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.141280   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:23.141287   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:23.141354   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:23.172914   75402 cri.go:89] found id: ""
	I0816 18:18:23.172941   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.172950   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:23.172958   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:23.173015   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:23.205593   75402 cri.go:89] found id: ""
	I0816 18:18:23.205621   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.205632   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:23.205640   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:23.205706   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:23.239358   75402 cri.go:89] found id: ""
	I0816 18:18:23.239383   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.239392   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:23.239401   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:23.239463   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:23.271798   75402 cri.go:89] found id: ""
	I0816 18:18:23.271828   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.271838   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:23.271844   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:23.271911   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:23.305287   75402 cri.go:89] found id: ""
	I0816 18:18:23.305316   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.305327   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:23.305335   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:23.305397   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:23.344041   75402 cri.go:89] found id: ""
	I0816 18:18:23.344067   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.344075   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:23.344080   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:23.344134   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:23.376540   75402 cri.go:89] found id: ""
	I0816 18:18:23.376571   75402 logs.go:276] 0 containers: []
	W0816 18:18:23.376583   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:23.376601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:23.376616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:23.428265   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:23.428301   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:23.441377   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:23.441404   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:23.509219   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:23.509243   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:23.509259   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:23.589151   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:23.589186   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.126176   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:26.140228   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:26.140292   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:26.176768   75402 cri.go:89] found id: ""
	I0816 18:18:26.176807   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.176820   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:26.176829   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:26.176887   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:26.212357   75402 cri.go:89] found id: ""
	I0816 18:18:26.212383   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.212390   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:26.212396   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:26.212457   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:26.245256   75402 cri.go:89] found id: ""
	I0816 18:18:26.245290   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.245302   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:26.245309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:26.245370   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:26.277525   75402 cri.go:89] found id: ""
	I0816 18:18:26.277561   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.277569   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:26.277575   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:26.277627   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:26.310928   75402 cri.go:89] found id: ""
	I0816 18:18:26.310956   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.310967   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:26.310976   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:26.311052   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:26.344595   75402 cri.go:89] found id: ""
	I0816 18:18:26.344647   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.344661   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:26.344669   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:26.344741   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:26.377776   75402 cri.go:89] found id: ""
	I0816 18:18:26.377805   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.377814   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:26.377820   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:26.377872   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:26.411139   75402 cri.go:89] found id: ""
	I0816 18:18:26.411167   75402 logs.go:276] 0 containers: []
	W0816 18:18:26.411179   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:26.411190   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:26.411204   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:26.493802   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:26.493838   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:26.529542   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:26.529576   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:26.583544   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:26.583588   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:26.596429   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:26.596459   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:26.667858   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:25.441062   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.940609   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.706109   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:28.206196   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:25.448352   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:27.947950   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:29.168766   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:29.182032   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:29.182103   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:29.220213   75402 cri.go:89] found id: ""
	I0816 18:18:29.220239   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.220247   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:29.220253   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:29.220300   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:29.257820   75402 cri.go:89] found id: ""
	I0816 18:18:29.257850   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.257861   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:29.257867   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:29.257933   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:29.290450   75402 cri.go:89] found id: ""
	I0816 18:18:29.290473   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.290480   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:29.290485   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:29.290546   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:29.328032   75402 cri.go:89] found id: ""
	I0816 18:18:29.328061   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.328070   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:29.328076   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:29.328135   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:29.362104   75402 cri.go:89] found id: ""
	I0816 18:18:29.362132   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.362141   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:29.362149   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:29.362218   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:29.395258   75402 cri.go:89] found id: ""
	I0816 18:18:29.395290   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.395301   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:29.395309   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:29.395375   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:29.426617   75402 cri.go:89] found id: ""
	I0816 18:18:29.426646   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.426656   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:29.426663   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:29.426725   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:29.462861   75402 cri.go:89] found id: ""
	I0816 18:18:29.462890   75402 logs.go:276] 0 containers: []
	W0816 18:18:29.462901   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:29.462912   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:29.462928   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:29.514882   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:29.514915   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:29.528101   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:29.528128   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:29.598983   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:29.599005   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:29.599020   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:29.684955   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:29.684991   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.230155   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:32.244158   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:32.244226   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:32.281993   75402 cri.go:89] found id: ""
	I0816 18:18:32.282020   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.282031   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:32.282037   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:32.282100   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:32.316870   75402 cri.go:89] found id: ""
	I0816 18:18:32.316896   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.316906   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:32.316914   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:32.316976   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:32.352597   75402 cri.go:89] found id: ""
	I0816 18:18:32.352637   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.352649   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:32.352656   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:32.352722   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:32.387520   75402 cri.go:89] found id: ""
	I0816 18:18:32.387564   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.387576   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:32.387584   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:32.387638   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:32.421499   75402 cri.go:89] found id: ""
	I0816 18:18:32.421526   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.421537   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:32.421544   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:32.421603   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:32.460048   75402 cri.go:89] found id: ""
	I0816 18:18:32.460075   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.460086   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:32.460093   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:32.460151   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:32.498148   75402 cri.go:89] found id: ""
	I0816 18:18:32.498176   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.498184   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:32.498190   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:32.498248   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:32.530683   75402 cri.go:89] found id: ""
	I0816 18:18:32.530717   75402 logs.go:276] 0 containers: []
	W0816 18:18:32.530730   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:32.530741   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:32.530762   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:32.614776   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:32.614820   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:32.655628   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:32.655667   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:32.722763   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:32.722807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:32.739817   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:32.739847   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:32.819297   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:30.440684   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.441210   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.206433   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.707436   74828 pod_ready.go:103] pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:30.448781   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:32.457660   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:35.320173   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:35.332427   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:35.332503   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:35.366316   75402 cri.go:89] found id: ""
	I0816 18:18:35.366346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.366357   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:35.366365   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:35.366433   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:35.399308   75402 cri.go:89] found id: ""
	I0816 18:18:35.399346   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.399357   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:35.399367   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:35.399434   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:35.434926   75402 cri.go:89] found id: ""
	I0816 18:18:35.434958   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.434971   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:35.434980   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:35.435042   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:35.473222   75402 cri.go:89] found id: ""
	I0816 18:18:35.473247   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.473258   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:35.473266   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:35.473343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:35.505484   75402 cri.go:89] found id: ""
	I0816 18:18:35.505521   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.505533   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:35.505540   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:35.505608   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:35.540532   75402 cri.go:89] found id: ""
	I0816 18:18:35.540573   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.540584   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:35.540590   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:35.540663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:35.574205   75402 cri.go:89] found id: ""
	I0816 18:18:35.574235   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.574245   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:35.574252   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:35.574343   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:35.614707   75402 cri.go:89] found id: ""
	I0816 18:18:35.614732   75402 logs.go:276] 0 containers: []
	W0816 18:18:35.614739   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:35.614747   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:35.614759   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:35.690830   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:35.690861   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:35.726601   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:35.726627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:35.774706   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:35.774736   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:35.787557   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:35.787616   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:35.857474   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:34.940337   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.440507   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:34.701151   74828 pod_ready.go:82] duration metric: took 4m0.000965442s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:34.701178   74828 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-rxtwg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:34.701196   74828 pod_ready.go:39] duration metric: took 4m13.502588966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:34.701228   74828 kubeadm.go:597] duration metric: took 4m21.306103533s to restartPrimaryControlPlane
	W0816 18:18:34.701293   74828 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:34.701330   74828 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:34.948583   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:37.447544   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:39.448942   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:38.358057   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:38.371128   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:38.371189   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:38.404812   75402 cri.go:89] found id: ""
	I0816 18:18:38.404844   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.404855   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:38.404864   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:38.404926   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:38.437922   75402 cri.go:89] found id: ""
	I0816 18:18:38.437950   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.437960   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:38.437967   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:38.438023   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:38.471474   75402 cri.go:89] found id: ""
	I0816 18:18:38.471509   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.471519   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:38.471525   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:38.471582   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:38.510132   75402 cri.go:89] found id: ""
	I0816 18:18:38.510158   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.510168   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:38.510184   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:38.510246   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:38.542212   75402 cri.go:89] found id: ""
	I0816 18:18:38.542251   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.542262   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:38.542269   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:38.542341   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:38.579037   75402 cri.go:89] found id: ""
	I0816 18:18:38.579068   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.579076   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:38.579082   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:38.579129   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:38.619219   75402 cri.go:89] found id: ""
	I0816 18:18:38.619252   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.619263   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:38.619272   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:38.619335   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:38.655124   75402 cri.go:89] found id: ""
	I0816 18:18:38.655149   75402 logs.go:276] 0 containers: []
	W0816 18:18:38.655169   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:38.655180   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:38.655194   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:38.737857   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:38.737894   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:38.779777   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:38.779806   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:38.831556   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:38.831590   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:38.844496   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:38.844523   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:38.914543   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.415612   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:41.428187   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:41.428251   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:41.462932   75402 cri.go:89] found id: ""
	I0816 18:18:41.462964   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.462975   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:41.462983   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:41.463043   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:41.497712   75402 cri.go:89] found id: ""
	I0816 18:18:41.497739   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.497748   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:41.497754   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:41.497804   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:41.528430   75402 cri.go:89] found id: ""
	I0816 18:18:41.528455   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.528463   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:41.528468   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:41.528527   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:41.560048   75402 cri.go:89] found id: ""
	I0816 18:18:41.560071   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.560081   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:41.560088   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:41.560142   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:41.592536   75402 cri.go:89] found id: ""
	I0816 18:18:41.592566   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.592577   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:41.592585   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:41.592663   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:41.626850   75402 cri.go:89] found id: ""
	I0816 18:18:41.626884   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.626894   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:41.626902   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:41.626965   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:41.660452   75402 cri.go:89] found id: ""
	I0816 18:18:41.660478   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.660486   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:41.660491   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:41.660542   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:41.695990   75402 cri.go:89] found id: ""
	I0816 18:18:41.696012   75402 logs.go:276] 0 containers: []
	W0816 18:18:41.696020   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:41.696028   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:41.696039   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:41.733107   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:41.733134   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:41.782812   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:41.782843   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:41.795954   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:41.795984   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:41.867473   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:41.867526   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:41.867545   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:39.442037   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.940088   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:41.948682   75006 pod_ready.go:103] pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:43.942215   75006 pod_ready.go:82] duration metric: took 4m0.000164284s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" ...
	E0816 18:18:43.942239   75006 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fc4h4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 18:18:43.942255   75006 pod_ready.go:39] duration metric: took 4m12.163955241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:18:43.942279   75006 kubeadm.go:597] duration metric: took 4m21.898271101s to restartPrimaryControlPlane
	W0816 18:18:43.942326   75006 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:43.942352   75006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:44.450340   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:44.463299   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:18:44.463361   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:18:44.495068   75402 cri.go:89] found id: ""
	I0816 18:18:44.495098   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.495108   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:18:44.495116   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:18:44.495221   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:18:44.529615   75402 cri.go:89] found id: ""
	I0816 18:18:44.529638   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.529646   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:18:44.529651   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:18:44.529701   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:18:44.565275   75402 cri.go:89] found id: ""
	I0816 18:18:44.565298   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.565306   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:18:44.565321   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:18:44.565384   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:18:44.598554   75402 cri.go:89] found id: ""
	I0816 18:18:44.598590   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.598601   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:18:44.598609   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:18:44.598673   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:18:44.631389   75402 cri.go:89] found id: ""
	I0816 18:18:44.631422   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.631436   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:18:44.631446   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:18:44.631519   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:18:44.663986   75402 cri.go:89] found id: ""
	I0816 18:18:44.664013   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.664023   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:18:44.664031   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:18:44.664095   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:18:44.700238   75402 cri.go:89] found id: ""
	I0816 18:18:44.700263   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.700272   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:18:44.700277   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:18:44.700330   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:18:44.732737   75402 cri.go:89] found id: ""
	I0816 18:18:44.732766   75402 logs.go:276] 0 containers: []
	W0816 18:18:44.732779   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:18:44.732790   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:18:44.732807   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:18:44.806427   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:18:44.806462   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:18:44.842965   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:18:44.842994   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:18:44.895745   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:18:44.895781   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:18:44.909850   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:18:44.909885   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:18:44.979315   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:18:47.479563   75402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:18:47.491876   75402 kubeadm.go:597] duration metric: took 4m4.431091965s to restartPrimaryControlPlane
	W0816 18:18:47.491939   75402 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 18:18:47.491962   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:18:43.941047   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:46.440592   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:48.441208   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:51.168302   75402 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.676317513s)
	I0816 18:18:51.168387   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:18:51.182492   75402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:18:51.192403   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:18:51.202058   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:18:51.202075   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:18:51.202115   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:18:51.210661   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:18:51.210721   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:18:51.219979   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:18:51.228422   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:18:51.228488   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:18:51.237159   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.245555   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:18:51.245622   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:18:51.253986   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:18:51.261885   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:18:51.261927   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:18:51.270479   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:18:51.335784   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:18:51.335883   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:18:51.482910   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:18:51.483069   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:18:51.483228   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:18:51.652730   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:18:51.655077   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:18:51.655185   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:18:51.655304   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:18:51.655425   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:18:51.655521   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:18:51.657408   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:18:51.657485   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:18:51.657561   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:18:51.657645   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:18:51.657748   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:18:51.657854   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:18:51.657911   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:18:51.657984   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:18:51.720786   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:18:51.991165   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:18:52.140983   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:18:52.453361   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:18:52.467210   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:18:52.469222   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:18:52.469338   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:18:52.590938   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:18:52.592875   75402 out.go:235]   - Booting up control plane ...
	I0816 18:18:52.592987   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:18:52.602597   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:18:52.603616   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:18:52.604417   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:18:52.606669   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:18:50.939639   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:52.940202   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:54.940917   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:57.439382   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:18:59.443139   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:01.940496   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:00.803654   74828 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.102297191s)
	I0816 18:19:00.803740   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:00.818126   74828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:00.827602   74828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:00.836389   74828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:00.836410   74828 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:00.836455   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:19:00.844830   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:00.844880   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:00.853736   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:19:00.862795   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:00.862855   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:00.872056   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.880410   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:00.880461   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:00.889000   74828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:19:00.897508   74828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:00.897568   74828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:00.906256   74828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:00.953336   74828 kubeadm.go:310] W0816 18:19:00.929461    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:00.955337   74828 kubeadm.go:310] W0816 18:19:00.931382    3053 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:01.068247   74828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:03.940545   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:06.439727   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:08.440027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:09.225829   74828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:09.225908   74828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:09.226014   74828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:09.226126   74828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:09.226242   74828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:09.226329   74828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:09.228065   74828 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:09.228133   74828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:09.228183   74828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:09.228252   74828 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:09.228315   74828 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:09.228403   74828 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:09.228489   74828 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:09.228584   74828 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:09.228686   74828 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:09.228787   74828 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:09.228864   74828 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:09.228903   74828 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:09.228983   74828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:09.229052   74828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:09.229147   74828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:09.229234   74828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:09.229332   74828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:09.229410   74828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:09.229532   74828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:09.229607   74828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.230874   74828 out.go:235]   - Booting up control plane ...
	I0816 18:19:09.230948   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:09.231032   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:09.231090   74828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:09.231202   74828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:09.231321   74828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:09.231381   74828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:09.231572   74828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:09.231662   74828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:09.231711   74828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.32263ms
	I0816 18:19:09.231774   74828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:09.231824   74828 kubeadm.go:310] [api-check] The API server is healthy after 5.002367118s
	I0816 18:19:09.231923   74828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:09.232091   74828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:09.232166   74828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:09.232419   74828 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-864476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:09.232497   74828 kubeadm.go:310] [bootstrap-token] Using token: 6m1jus.xr9uhx26t28q092p
	I0816 18:19:09.233962   74828 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:09.234068   74828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:09.234164   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:09.234315   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:09.234425   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:09.234522   74828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:09.234615   74828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:09.234775   74828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:09.234830   74828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:09.234892   74828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:09.234901   74828 kubeadm.go:310] 
	I0816 18:19:09.234971   74828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:09.234980   74828 kubeadm.go:310] 
	I0816 18:19:09.235067   74828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:09.235076   74828 kubeadm.go:310] 
	I0816 18:19:09.235115   74828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:09.235194   74828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:09.235271   74828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:09.235280   74828 kubeadm.go:310] 
	I0816 18:19:09.235367   74828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:09.235376   74828 kubeadm.go:310] 
	I0816 18:19:09.235448   74828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:09.235459   74828 kubeadm.go:310] 
	I0816 18:19:09.235533   74828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:09.235607   74828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:09.235677   74828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:09.235683   74828 kubeadm.go:310] 
	I0816 18:19:09.235795   74828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:09.235907   74828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:09.235916   74828 kubeadm.go:310] 
	I0816 18:19:09.235986   74828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236080   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:09.236099   74828 kubeadm.go:310] 	--control-plane 
	I0816 18:19:09.236105   74828 kubeadm.go:310] 
	I0816 18:19:09.236177   74828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:09.236185   74828 kubeadm.go:310] 
	I0816 18:19:09.236268   74828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6m1jus.xr9uhx26t28q092p \
	I0816 18:19:09.236403   74828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:09.236416   74828 cni.go:84] Creating CNI manager for ""
	I0816 18:19:09.236422   74828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:09.237971   74828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:10.069497   75006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.127122656s)
	I0816 18:19:10.069585   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:10.085322   75006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 18:19:10.098736   75006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:19:10.108163   75006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:19:10.108183   75006 kubeadm.go:157] found existing configuration files:
	
	I0816 18:19:10.108224   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 18:19:10.117330   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:19:10.117382   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:19:10.127090   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 18:19:10.135574   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:19:10.135648   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:19:10.146127   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.154474   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:19:10.154533   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:19:10.163245   75006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 18:19:10.171315   75006 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:19:10.171375   75006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:19:10.181088   75006 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:19:10.225495   75006 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 18:19:10.225571   75006 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:19:10.327332   75006 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:19:10.327442   75006 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:19:10.327586   75006 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 18:19:10.335739   75006 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:19:10.337610   75006 out.go:235]   - Generating certificates and keys ...
	I0816 18:19:10.337730   75006 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:19:10.337818   75006 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:19:10.337935   75006 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:19:10.338054   75006 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:19:10.338174   75006 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:19:10.338254   75006 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:19:10.338359   75006 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:19:10.338452   75006 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:19:10.338562   75006 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:19:10.338668   75006 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:19:10.338718   75006 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:19:10.338796   75006 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:19:10.437447   75006 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:19:10.868191   75006 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 18:19:10.961497   75006 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:19:11.363158   75006 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:19:11.963929   75006 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:19:11.964410   75006 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:19:11.967675   75006 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:19:09.239250   74828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:09.250270   74828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:09.267205   74828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:09.267346   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.267366   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-864476 minikube.k8s.io/updated_at=2024_08_16T18_19_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=no-preload-864476 minikube.k8s.io/primary=true
	I0816 18:19:09.282111   74828 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:09.471160   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:09.971453   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.471576   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:10.971748   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.471954   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:11.971371   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.471626   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:12.972021   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.472254   74828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:13.588350   74828 kubeadm.go:1113] duration metric: took 4.321062687s to wait for elevateKubeSystemPrivileges
	I0816 18:19:13.588392   74828 kubeadm.go:394] duration metric: took 5m0.245036951s to StartCluster
	I0816 18:19:13.588413   74828 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.588500   74828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:13.591118   74828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:13.591418   74828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:13.591683   74828 config.go:182] Loaded profile config "no-preload-864476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:13.591744   74828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:13.591809   74828 addons.go:69] Setting storage-provisioner=true in profile "no-preload-864476"
	I0816 18:19:13.591839   74828 addons.go:234] Setting addon storage-provisioner=true in "no-preload-864476"
	W0816 18:19:13.591851   74828 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:19:13.591882   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592025   74828 addons.go:69] Setting default-storageclass=true in profile "no-preload-864476"
	I0816 18:19:13.592070   74828 addons.go:69] Setting metrics-server=true in profile "no-preload-864476"
	I0816 18:19:13.592135   74828 addons.go:234] Setting addon metrics-server=true in "no-preload-864476"
	W0816 18:19:13.592150   74828 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:13.592073   74828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-864476"
	I0816 18:19:13.592272   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592206   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.592326   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592654   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592677   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592731   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.592753   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.592790   74828 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:13.594236   74828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:13.613019   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42847
	I0816 18:19:13.613061   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0816 18:19:13.613087   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0816 18:19:13.613498   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613552   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.613708   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.614094   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614113   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614198   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614222   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614403   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.614420   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.614478   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614675   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614728   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.614856   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.615039   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615068   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.615401   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.615442   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.619787   74828 addons.go:234] Setting addon default-storageclass=true in "no-preload-864476"
	W0816 18:19:13.619815   74828 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:13.619848   74828 host.go:66] Checking if "no-preload-864476" exists ...
	I0816 18:19:13.620274   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.620438   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.642013   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43679
	I0816 18:19:13.642196   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0816 18:19:13.642654   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643201   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.643227   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.643304   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.643888   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644065   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.644086   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.644537   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.644548   74828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:13.644591   74828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:13.645002   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.646881   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0816 18:19:13.647127   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.647406   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.648126   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.648156   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.648725   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.648935   74828 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:13.649121   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.649823   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:13.649840   74828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:13.649861   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.651524   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.652917   74828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:10.441027   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:12.939870   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:13.653916   74828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.653933   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:13.653952   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.654035   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654463   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.654482   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.654665   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.654883   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.655044   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.655247   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.657315   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657699   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.657783   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.657974   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.658125   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.658247   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.658362   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:13.670111   74828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0816 18:19:13.670711   74828 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:13.671220   74828 main.go:141] libmachine: Using API Version  1
	I0816 18:19:13.671239   74828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:13.671585   74828 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:13.671778   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetState
	I0816 18:19:13.673274   74828 main.go:141] libmachine: (no-preload-864476) Calling .DriverName
	I0816 18:19:13.673480   74828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:13.673493   74828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:13.673511   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHHostname
	I0816 18:19:13.677160   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677542   74828 main.go:141] libmachine: (no-preload-864476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:50:53", ip: ""} in network mk-no-preload-864476: {Iface:virbr2 ExpiryTime:2024-08-16 19:13:46 +0000 UTC Type:0 Mac:52:54:00:f3:50:53 Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:no-preload-864476 Clientid:01:52:54:00:f3:50:53}
	I0816 18:19:13.677564   74828 main.go:141] libmachine: (no-preload-864476) DBG | domain no-preload-864476 has defined IP address 192.168.50.50 and MAC address 52:54:00:f3:50:53 in network mk-no-preload-864476
	I0816 18:19:13.677854   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHPort
	I0816 18:19:13.678049   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHKeyPath
	I0816 18:19:13.678170   74828 main.go:141] libmachine: (no-preload-864476) Calling .GetSSHUsername
	I0816 18:19:13.678263   74828 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/no-preload-864476/id_rsa Username:docker}
	I0816 18:19:11.970291   75006 out.go:235]   - Booting up control plane ...
	I0816 18:19:11.970385   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:19:11.970516   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:19:11.970617   75006 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:19:11.988374   75006 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:19:11.997980   75006 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:19:11.998045   75006 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:19:12.132297   75006 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 18:19:12.132447   75006 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 18:19:13.135489   75006 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003222114s
	I0816 18:19:13.135584   75006 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 18:19:13.840111   74828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:13.903130   74828 node_ready.go:35] waiting up to 6m0s for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915130   74828 node_ready.go:49] node "no-preload-864476" has status "Ready":"True"
	I0816 18:19:13.915163   74828 node_ready.go:38] duration metric: took 12.001127ms for node "no-preload-864476" to be "Ready" ...
	I0816 18:19:13.915174   74828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:13.926756   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:13.944598   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:13.971002   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:13.971036   74828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:13.998897   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:14.015731   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:14.015754   74828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:14.080186   74828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:14.080212   74828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:14.187279   74828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:15.075984   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.077053329s)
	I0816 18:19:15.076058   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076071   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076364   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131733705s)
	I0816 18:19:15.076478   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076495   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076405   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076567   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076591   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076600   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076436   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.076786   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076838   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.076859   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.076879   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.076969   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.076987   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.077443   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.077517   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.077535   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.164872   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.164903   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.165218   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.165238   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373294   74828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1859614s)
	I0816 18:19:15.373399   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373417   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.373716   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.373769   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.373804   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.373825   74828 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:15.373837   74828 main.go:141] libmachine: (no-preload-864476) Calling .Close
	I0816 18:19:15.374124   74828 main.go:141] libmachine: (no-preload-864476) DBG | Closing plugin on server side
	I0816 18:19:15.374130   74828 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:15.374181   74828 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:15.374192   74828 addons.go:475] Verifying addon metrics-server=true in "no-preload-864476"
	I0816 18:19:15.375801   74828 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:17.638005   75006 kubeadm.go:310] [api-check] The API server is healthy after 4.502130995s
	I0816 18:19:17.658334   75006 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 18:19:17.678882   75006 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 18:19:17.709612   75006 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 18:19:17.709881   75006 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-256678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 18:19:17.724755   75006 kubeadm.go:310] [bootstrap-token] Using token: cdypho.k0vxtmnp4c93945s
	I0816 18:19:14.941895   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.440923   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:15.377611   74828 addons.go:510] duration metric: took 1.785861834s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:15.934515   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:18.435321   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:17.726222   75006 out.go:235]   - Configuring RBAC rules ...
	I0816 18:19:17.726361   75006 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 18:19:17.733325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 18:19:17.740707   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 18:19:17.747325   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 18:19:17.751554   75006 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 18:19:17.761084   75006 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 18:19:18.044607   75006 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 18:19:18.485134   75006 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 18:19:19.044481   75006 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 18:19:19.045968   75006 kubeadm.go:310] 
	I0816 18:19:19.046038   75006 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 18:19:19.046069   75006 kubeadm.go:310] 
	I0816 18:19:19.046185   75006 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 18:19:19.046198   75006 kubeadm.go:310] 
	I0816 18:19:19.046229   75006 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 18:19:19.046298   75006 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 18:19:19.046343   75006 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 18:19:19.046349   75006 kubeadm.go:310] 
	I0816 18:19:19.046396   75006 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 18:19:19.046413   75006 kubeadm.go:310] 
	I0816 18:19:19.046504   75006 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 18:19:19.046529   75006 kubeadm.go:310] 
	I0816 18:19:19.046614   75006 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 18:19:19.046718   75006 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 18:19:19.046813   75006 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 18:19:19.046828   75006 kubeadm.go:310] 
	I0816 18:19:19.046941   75006 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 18:19:19.047047   75006 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 18:19:19.047056   75006 kubeadm.go:310] 
	I0816 18:19:19.047153   75006 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047304   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 \
	I0816 18:19:19.047346   75006 kubeadm.go:310] 	--control-plane 
	I0816 18:19:19.047358   75006 kubeadm.go:310] 
	I0816 18:19:19.047470   75006 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 18:19:19.047480   75006 kubeadm.go:310] 
	I0816 18:19:19.047596   75006 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cdypho.k0vxtmnp4c93945s \
	I0816 18:19:19.047740   75006 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:340cd7e6f4afd769ffe0b97bb40a910498795aaf29fab06cd6b8c9a3957ccd57 
	I0816 18:19:19.048871   75006 kubeadm.go:310] W0816 18:19:10.202021    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049167   75006 kubeadm.go:310] W0816 18:19:10.202700    2564 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 18:19:19.049279   75006 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:19:19.049304   75006 cni.go:84] Creating CNI manager for ""
	I0816 18:19:19.049318   75006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 18:19:19.051543   75006 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 18:19:19.052677   75006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 18:19:19.063536   75006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 18:19:19.084460   75006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 18:19:19.084540   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.084608   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-256678 minikube.k8s.io/updated_at=2024_08_16T18_19_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=default-k8s-diff-port-256678 minikube.k8s.io/primary=true
	I0816 18:19:19.257760   75006 ops.go:34] apiserver oom_adj: -16
	I0816 18:19:19.258124   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.759000   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:19.940737   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:22.440273   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.934243   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:23.433046   74828 pod_ready.go:103] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:20.258798   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:20.759112   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.258598   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:21.758433   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.258181   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:22.758276   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.258184   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.758168   75006 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 18:19:23.846653   75006 kubeadm.go:1113] duration metric: took 4.762173901s to wait for elevateKubeSystemPrivileges
	I0816 18:19:23.846688   75006 kubeadm.go:394] duration metric: took 5m1.846731834s to StartCluster
	I0816 18:19:23.846708   75006 settings.go:142] acquiring lock: {Name:mk7d6bbff611e99a9222156e58c7ea3b0a5541f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.846784   75006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 18:19:23.848375   75006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-9545/kubeconfig: {Name:mk7d5abb3a1b9b654c5d7490d32aeae2c9a1bdd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:19:23.848662   75006 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.144 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 18:19:23.848750   75006 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:19:23.848814   75006 config.go:182] Loaded profile config "default-k8s-diff-port-256678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 18:19:23.848840   75006 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848858   75006 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848866   75006 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-256678"
	I0816 18:19:23.848878   75006 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-256678"
	I0816 18:19:23.848882   75006 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.848887   75006 addons.go:243] addon storage-provisioner should already be in state true
	W0816 18:19:23.848890   75006 addons.go:243] addon metrics-server should already be in state true
	I0816 18:19:23.848915   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848918   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.848914   75006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-256678"
	I0816 18:19:23.849232   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849259   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849271   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849293   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.849362   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.849404   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.850478   75006 out.go:177] * Verifying Kubernetes components...
	I0816 18:19:23.852034   75006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:19:23.865786   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0816 18:19:23.865939   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0816 18:19:23.866248   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866304   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866398   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0816 18:19:23.866816   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866845   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.866860   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.866863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.866935   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867328   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867333   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867430   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.867447   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.867517   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.867742   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.867871   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.867897   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.868227   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.868247   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.870993   75006 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-256678"
	W0816 18:19:23.871020   75006 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:19:23.871051   75006 host.go:66] Checking if "default-k8s-diff-port-256678" exists ...
	I0816 18:19:23.871403   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.871433   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.885139   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0816 18:19:23.885814   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.886386   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.886403   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.886814   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.886856   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0816 18:19:23.887024   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.887202   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.887542   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0816 18:19:23.887784   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.887797   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.887863   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.888165   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.888372   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.888389   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.889026   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.889254   75006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 18:19:23.889268   75006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 18:19:23.889518   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.889758   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.890483   75006 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:19:23.891262   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.891838   75006 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:23.891859   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:19:23.891877   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.892581   75006 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:19:23.893621   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:19:23.893684   75006 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:19:23.893882   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.894413   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.894973   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.894994   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.895161   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.895322   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.895578   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.895757   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.897167   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897666   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.897685   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.897802   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.897972   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.898132   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.898248   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:23.906377   75006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43895
	I0816 18:19:23.906708   75006 main.go:141] libmachine: () Calling .GetVersion
	I0816 18:19:23.907497   75006 main.go:141] libmachine: Using API Version  1
	I0816 18:19:23.907513   75006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 18:19:23.907932   75006 main.go:141] libmachine: () Calling .GetMachineName
	I0816 18:19:23.908240   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetState
	I0816 18:19:23.909917   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .DriverName
	I0816 18:19:23.910141   75006 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:23.910159   75006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:19:23.910177   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHHostname
	I0816 18:19:23.912435   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912678   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:32:d8", ip: ""} in network mk-default-k8s-diff-port-256678: {Iface:virbr4 ExpiryTime:2024-08-16 19:14:07 +0000 UTC Type:0 Mac:52:54:00:76:32:d8 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:default-k8s-diff-port-256678 Clientid:01:52:54:00:76:32:d8}
	I0816 18:19:23.912710   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | domain default-k8s-diff-port-256678 has defined IP address 192.168.72.144 and MAC address 52:54:00:76:32:d8 in network mk-default-k8s-diff-port-256678
	I0816 18:19:23.912858   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHPort
	I0816 18:19:23.912982   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHKeyPath
	I0816 18:19:23.913066   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .GetSSHUsername
	I0816 18:19:23.913138   75006 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/default-k8s-diff-port-256678/id_rsa Username:docker}
	I0816 18:19:24.062487   75006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:19:24.083148   75006 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092886   75006 node_ready.go:49] node "default-k8s-diff-port-256678" has status "Ready":"True"
	I0816 18:19:24.092907   75006 node_ready.go:38] duration metric: took 9.72996ms for node "default-k8s-diff-port-256678" to be "Ready" ...
	I0816 18:19:24.092916   75006 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.099123   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.184211   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:19:24.197461   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:19:24.197491   75006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:19:24.219263   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:19:24.258463   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:19:24.258498   75006 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:19:24.355822   75006 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.355902   75006 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:19:24.436401   75006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:19:24.866038   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866125   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866058   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866163   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866478   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866517   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866526   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866536   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866546   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866600   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866626   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866636   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866649   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.866676   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.866778   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866793   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.866810   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866888   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:24.866923   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.866932   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886041   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:24.886065   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:24.886338   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:24.886359   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:24.886384   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.225367   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225397   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225704   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.225720   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.225730   75006 main.go:141] libmachine: Making call to close driver server
	I0816 18:19:25.225739   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) Calling .Close
	I0816 18:19:25.225961   75006 main.go:141] libmachine: (default-k8s-diff-port-256678) DBG | Closing plugin on server side
	I0816 18:19:25.226005   75006 main.go:141] libmachine: Successfully made call to close driver server
	I0816 18:19:25.226025   75006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 18:19:25.226043   75006 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-256678"
	I0816 18:19:25.227605   75006 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 18:19:23.934167   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.934191   74828 pod_ready.go:82] duration metric: took 10.007408518s for pod "coredns-6f6b679f8f-6zfgr" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.934200   74828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940226   74828 pod_ready.go:93] pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.940249   74828 pod_ready.go:82] duration metric: took 6.040513ms for pod "coredns-6f6b679f8f-qr4q9" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.940260   74828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945330   74828 pod_ready.go:93] pod "etcd-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.945351   74828 pod_ready.go:82] duration metric: took 5.082362ms for pod "etcd-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.945361   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949772   74828 pod_ready.go:93] pod "kube-apiserver-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.949800   74828 pod_ready.go:82] duration metric: took 4.429575ms for pod "kube-apiserver-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.949810   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954308   74828 pod_ready.go:93] pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:23.954328   74828 pod_ready.go:82] duration metric: took 4.510361ms for pod "kube-controller-manager-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:23.954338   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331265   74828 pod_ready.go:93] pod "kube-proxy-6g6zx" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.331306   74828 pod_ready.go:82] duration metric: took 376.9609ms for pod "kube-proxy-6g6zx" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.331320   74828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730715   74828 pod_ready.go:93] pod "kube-scheduler-no-preload-864476" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:24.730740   74828 pod_ready.go:82] duration metric: took 399.412376ms for pod "kube-scheduler-no-preload-864476" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:24.730748   74828 pod_ready.go:39] duration metric: took 10.815561534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:24.730761   74828 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:24.730820   74828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:24.746674   74828 api_server.go:72] duration metric: took 11.155216371s to wait for apiserver process to appear ...
	I0816 18:19:24.746697   74828 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:24.746714   74828 api_server.go:253] Checking apiserver healthz at https://192.168.50.50:8443/healthz ...
	I0816 18:19:24.750801   74828 api_server.go:279] https://192.168.50.50:8443/healthz returned 200:
	ok
	I0816 18:19:24.751835   74828 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:24.751864   74828 api_server.go:131] duration metric: took 5.159229ms to wait for apiserver health ...
	I0816 18:19:24.751872   74828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:24.935471   74828 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:24.935510   74828 system_pods.go:61] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:24.935520   74828 system_pods.go:61] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:24.935539   74828 system_pods.go:61] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:24.935548   74828 system_pods.go:61] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:24.935555   74828 system_pods.go:61] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:24.935562   74828 system_pods.go:61] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:24.935572   74828 system_pods.go:61] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:24.935584   74828 system_pods.go:61] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:24.935596   74828 system_pods.go:61] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:24.935607   74828 system_pods.go:74] duration metric: took 183.727841ms to wait for pod list to return data ...
	I0816 18:19:24.935621   74828 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:25.132713   74828 default_sa.go:45] found service account: "default"
	I0816 18:19:25.132740   74828 default_sa.go:55] duration metric: took 197.112152ms for default service account to be created ...
	I0816 18:19:25.132750   74828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:25.335012   74828 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:25.335043   74828 system_pods.go:89] "coredns-6f6b679f8f-6zfgr" [99157766-5089-4abe-a888-ec5992e5720a] Running
	I0816 18:19:25.335048   74828 system_pods.go:89] "coredns-6f6b679f8f-qr4q9" [d20f51f3-6786-496b-a6bc-7457462e46e9] Running
	I0816 18:19:25.335052   74828 system_pods.go:89] "etcd-no-preload-864476" [246e2b57-dbfe-4fd2-bc9d-ef927d48ba0b] Running
	I0816 18:19:25.335057   74828 system_pods.go:89] "kube-apiserver-no-preload-864476" [0e386448-037f-4543-941a-63f07e0d3186] Running
	I0816 18:19:25.335061   74828 system_pods.go:89] "kube-controller-manager-no-preload-864476" [71617b5c-9968-4d49-ac6c-7728712ac880] Running
	I0816 18:19:25.335064   74828 system_pods.go:89] "kube-proxy-6g6zx" [71a027eb-99e3-4b48-b9f1-2fc80cad9d2e] Running
	I0816 18:19:25.335068   74828 system_pods.go:89] "kube-scheduler-no-preload-864476" [c9b6ef2a-41fa-408b-86b7-eae10db4bec6] Running
	I0816 18:19:25.335075   74828 system_pods.go:89] "metrics-server-6867b74b74-r6cph" [a842267c-2c75-4799-aefc-2fb92ccb9129] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:25.335081   74828 system_pods.go:89] "storage-provisioner" [c05cdb7c-d74e-4008-a0fc-5eb6df9595af] Running
	I0816 18:19:25.335089   74828 system_pods.go:126] duration metric: took 202.33381ms to wait for k8s-apps to be running ...
	I0816 18:19:25.335098   74828 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:25.335141   74828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:25.349420   74828 system_svc.go:56] duration metric: took 14.310938ms WaitForService to wait for kubelet
	I0816 18:19:25.349457   74828 kubeadm.go:582] duration metric: took 11.758002576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:25.349480   74828 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:25.532145   74828 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:25.532175   74828 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:25.532189   74828 node_conditions.go:105] duration metric: took 182.702662ms to run NodePressure ...
	I0816 18:19:25.532200   74828 start.go:241] waiting for startup goroutines ...
	I0816 18:19:25.532209   74828 start.go:246] waiting for cluster config update ...
	I0816 18:19:25.532222   74828 start.go:255] writing updated cluster config ...
	I0816 18:19:25.532529   74828 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:25.588070   74828 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:25.589615   74828 out.go:177] * Done! kubectl is now configured to use "no-preload-864476" cluster and "default" namespace by default
	I0816 18:19:24.440489   74510 pod_ready.go:103] pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:25.441683   74510 pod_ready.go:82] duration metric: took 4m0.007816418s for pod "metrics-server-6867b74b74-6hkzb" in "kube-system" namespace to be "Ready" ...
	E0816 18:19:25.441706   74510 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 18:19:25.441714   74510 pod_ready.go:39] duration metric: took 4m6.551547163s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:25.441726   74510 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:25.441753   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:25.441805   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:25.492207   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.492235   74510 cri.go:89] found id: ""
	I0816 18:19:25.492245   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:25.492313   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.497307   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:25.497388   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:25.537185   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.537211   74510 cri.go:89] found id: ""
	I0816 18:19:25.537220   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:25.537422   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.546564   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:25.546644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:25.602794   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.602817   74510 cri.go:89] found id: ""
	I0816 18:19:25.602827   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:25.602879   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.609018   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:25.609097   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:25.657942   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:25.657970   74510 cri.go:89] found id: ""
	I0816 18:19:25.657980   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:25.658044   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.663485   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:25.663551   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:25.709526   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:25.709554   74510 cri.go:89] found id: ""
	I0816 18:19:25.709564   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:25.709612   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.715845   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:25.715898   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:25.766505   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:25.766522   74510 cri.go:89] found id: ""
	I0816 18:19:25.766529   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:25.766573   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.771051   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:25.771127   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:25.810669   74510 cri.go:89] found id: ""
	I0816 18:19:25.810699   74510 logs.go:276] 0 containers: []
	W0816 18:19:25.810711   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:25.810720   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:25.810779   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:25.851412   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.851432   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:25.851438   74510 cri.go:89] found id: ""
	I0816 18:19:25.851454   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:25.851507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.856154   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:25.860812   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:25.860837   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:25.910929   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:25.910957   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:25.951932   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:25.951959   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:25.999861   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:25.999894   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:26.036535   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:26.036559   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:26.089637   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:26.089675   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:26.157679   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:26.157714   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:26.171402   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:26.171432   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:26.209537   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:26.209564   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:26.252702   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:26.252732   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:26.303169   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:26.303203   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:26.784058   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:26.784090   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:26.904095   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:26.904137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:25.228674   75006 addons.go:510] duration metric: took 1.37992722s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 18:19:26.105147   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:28.107202   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.607933   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:19:32.608136   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:32.608430   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:29.459100   74510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:29.476158   74510 api_server.go:72] duration metric: took 4m17.827179017s to wait for apiserver process to appear ...
	I0816 18:19:29.476183   74510 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:29.476222   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:29.476279   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:29.509739   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:29.509767   74510 cri.go:89] found id: ""
	I0816 18:19:29.509776   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:29.509836   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.516078   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:29.516150   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:29.553766   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:29.553795   74510 cri.go:89] found id: ""
	I0816 18:19:29.553805   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:29.553857   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.558145   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:29.558210   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:29.599559   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:29.599583   74510 cri.go:89] found id: ""
	I0816 18:19:29.599594   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:29.599651   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.604108   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:29.604187   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:29.641990   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:29.642009   74510 cri.go:89] found id: ""
	I0816 18:19:29.642016   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:29.642062   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.645990   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:29.646047   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:29.679480   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:29.679505   74510 cri.go:89] found id: ""
	I0816 18:19:29.679514   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:29.679571   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.683361   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:29.683425   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:29.733167   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:29.733197   74510 cri.go:89] found id: ""
	I0816 18:19:29.733208   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:29.733266   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.737449   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:29.737518   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:29.771597   74510 cri.go:89] found id: ""
	I0816 18:19:29.771628   74510 logs.go:276] 0 containers: []
	W0816 18:19:29.771639   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:29.771647   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:29.771714   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:29.812346   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:29.812375   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:29.812381   74510 cri.go:89] found id: ""
	I0816 18:19:29.812390   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:29.812447   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.817909   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:29.821575   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:29.821602   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:30.288789   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:30.288836   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:30.332874   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:30.332904   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:30.347128   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:30.347168   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:30.456809   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:30.456845   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:30.505332   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:30.505362   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:30.540765   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:30.540798   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.576047   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:30.576077   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:30.611956   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:30.611992   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:30.678135   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:30.678177   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:30.732409   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:30.732437   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:30.773306   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:30.773331   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:30.827732   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:30.827763   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.367134   74510 api_server.go:253] Checking apiserver healthz at https://192.168.61.218:8443/healthz ...
	I0816 18:19:33.371523   74510 api_server.go:279] https://192.168.61.218:8443/healthz returned 200:
	ok
	I0816 18:19:33.372537   74510 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:33.372560   74510 api_server.go:131] duration metric: took 3.896368169s to wait for apiserver health ...
	I0816 18:19:33.372568   74510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:33.372589   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:19:33.372653   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:19:33.409551   74510 cri.go:89] found id: "8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:33.409579   74510 cri.go:89] found id: ""
	I0816 18:19:33.409590   74510 logs.go:276] 1 containers: [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b]
	I0816 18:19:33.409648   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.413727   74510 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:19:33.413802   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:19:33.457246   74510 cri.go:89] found id: "fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:33.457268   74510 cri.go:89] found id: ""
	I0816 18:19:33.457277   74510 logs.go:276] 1 containers: [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468]
	I0816 18:19:33.457337   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.461490   74510 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:19:33.461556   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:19:33.497141   74510 cri.go:89] found id: "3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:33.497169   74510 cri.go:89] found id: ""
	I0816 18:19:33.497180   74510 logs.go:276] 1 containers: [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d]
	I0816 18:19:33.497241   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.501353   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:19:33.501421   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:19:33.537797   74510 cri.go:89] found id: "99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:33.537816   74510 cri.go:89] found id: ""
	I0816 18:19:33.537823   74510 logs.go:276] 1 containers: [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc]
	I0816 18:19:33.537877   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.541727   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:19:33.541784   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:19:33.575882   74510 cri.go:89] found id: "92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:33.575905   74510 cri.go:89] found id: ""
	I0816 18:19:33.575913   74510 logs.go:276] 1 containers: [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf]
	I0816 18:19:33.575964   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.579592   74510 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:19:33.579644   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:19:33.614425   74510 cri.go:89] found id: "72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.614447   74510 cri.go:89] found id: ""
	I0816 18:19:33.614455   74510 logs.go:276] 1 containers: [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c]
	I0816 18:19:33.614507   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.618130   74510 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:19:33.618178   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:19:33.652369   74510 cri.go:89] found id: ""
	I0816 18:19:33.652393   74510 logs.go:276] 0 containers: []
	W0816 18:19:33.652403   74510 logs.go:278] No container was found matching "kindnet"
	I0816 18:19:33.652410   74510 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:19:33.652463   74510 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:19:33.687276   74510 cri.go:89] found id: "08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.687295   74510 cri.go:89] found id: "81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:33.687301   74510 cri.go:89] found id: ""
	I0816 18:19:33.687309   74510 logs.go:276] 2 containers: [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e]
	I0816 18:19:33.687361   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.691100   74510 ssh_runner.go:195] Run: which crictl
	I0816 18:19:33.695148   74510 logs.go:123] Gathering logs for kube-proxy [92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf] ...
	I0816 18:19:33.695179   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92401f8df7e940400841b7ee10ec3a8b7149ac4c439e214e6816a2f162146dbf"
	I0816 18:19:30.110901   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:32.606195   75006 pod_ready.go:103] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"False"
	I0816 18:19:34.110732   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.110764   75006 pod_ready.go:82] duration metric: took 10.011612904s for pod "coredns-6f6b679f8f-hx7sb" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.110778   75006 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116373   75006 pod_ready.go:93] pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.116392   75006 pod_ready.go:82] duration metric: took 5.607377ms for pod "coredns-6f6b679f8f-t74vf" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.116401   75006 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124005   75006 pod_ready.go:93] pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.124027   75006 pod_ready.go:82] duration metric: took 7.618878ms for pod "etcd-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.124039   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129603   75006 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.129623   75006 pod_ready.go:82] duration metric: took 5.575452ms for pod "kube-apiserver-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.129633   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145449   75006 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.145474   75006 pod_ready.go:82] duration metric: took 15.831669ms for pod "kube-controller-manager-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.145486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506455   75006 pod_ready.go:93] pod "kube-proxy-qsskg" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.506477   75006 pod_ready.go:82] duration metric: took 360.982998ms for pod "kube-proxy-qsskg" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.506486   75006 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905345   75006 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace has status "Ready":"True"
	I0816 18:19:34.905365   75006 pod_ready.go:82] duration metric: took 398.872303ms for pod "kube-scheduler-default-k8s-diff-port-256678" in "kube-system" namespace to be "Ready" ...
	I0816 18:19:34.905373   75006 pod_ready.go:39] duration metric: took 10.812448791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:19:34.905386   75006 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:19:34.905430   75006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:19:34.920554   75006 api_server.go:72] duration metric: took 11.071846456s to wait for apiserver process to appear ...
	I0816 18:19:34.920574   75006 api_server.go:88] waiting for apiserver healthz status ...
	I0816 18:19:34.920589   75006 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8444/healthz ...
	I0816 18:19:34.927194   75006 api_server.go:279] https://192.168.72.144:8444/healthz returned 200:
	ok
	I0816 18:19:34.928420   75006 api_server.go:141] control plane version: v1.31.0
	I0816 18:19:34.928437   75006 api_server.go:131] duration metric: took 7.857168ms to wait for apiserver health ...
	I0816 18:19:34.928443   75006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 18:19:35.107220   75006 system_pods.go:59] 9 kube-system pods found
	I0816 18:19:35.107248   75006 system_pods.go:61] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.107254   75006 system_pods.go:61] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.107258   75006 system_pods.go:61] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.107262   75006 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.107267   75006 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.107270   75006 system_pods.go:61] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.107274   75006 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.107280   75006 system_pods.go:61] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.107288   75006 system_pods.go:61] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.107296   75006 system_pods.go:74] duration metric: took 178.847431ms to wait for pod list to return data ...
	I0816 18:19:35.107302   75006 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:35.303619   75006 default_sa.go:45] found service account: "default"
	I0816 18:19:35.303646   75006 default_sa.go:55] duration metric: took 196.337687ms for default service account to be created ...
	I0816 18:19:35.303655   75006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:35.508401   75006 system_pods.go:86] 9 kube-system pods found
	I0816 18:19:35.508442   75006 system_pods.go:89] "coredns-6f6b679f8f-hx7sb" [4ebcdf34-c4e8-47bd-83f3-c56ea7bfb7d4] Running
	I0816 18:19:35.508452   75006 system_pods.go:89] "coredns-6f6b679f8f-t74vf" [41afd723-b034-460e-8e5f-197c8d8bcd7a] Running
	I0816 18:19:35.508460   75006 system_pods.go:89] "etcd-default-k8s-diff-port-256678" [46e68942-a5fc-433d-bf35-70f87a1b5962] Running
	I0816 18:19:35.508466   75006 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-256678" [0083826c-61fc-4597-84d9-a529df660696] Running
	I0816 18:19:35.508471   75006 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-256678" [e96435e2-1034-46d7-9f70-ba4435962528] Running
	I0816 18:19:35.508477   75006 system_pods.go:89] "kube-proxy-qsskg" [c863ca3c-8451-4fa7-b22d-c709e67bd26b] Running
	I0816 18:19:35.508483   75006 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-256678" [83bd764c-55ee-4fc4-8ebc-567b3fba1f95] Running
	I0816 18:19:35.508494   75006 system_pods.go:89] "metrics-server-6867b74b74-vmt5v" [8446e983-380f-42a8-ab5b-ce9b6d67ebad] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:35.508504   75006 system_pods.go:89] "storage-provisioner" [491e3d8e-5a8b-4187-a682-411c6fb9dd92] Running
	I0816 18:19:35.508521   75006 system_pods.go:126] duration metric: took 204.859728ms to wait for k8s-apps to be running ...
	I0816 18:19:35.508544   75006 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:35.508605   75006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:35.523660   75006 system_svc.go:56] duration metric: took 15.109288ms WaitForService to wait for kubelet
	I0816 18:19:35.523687   75006 kubeadm.go:582] duration metric: took 11.674985717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:35.523704   75006 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:35.704770   75006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:35.704797   75006 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:35.704808   75006 node_conditions.go:105] duration metric: took 181.099433ms to run NodePressure ...
	I0816 18:19:35.704818   75006 start.go:241] waiting for startup goroutines ...
	I0816 18:19:35.704824   75006 start.go:246] waiting for cluster config update ...
	I0816 18:19:35.704834   75006 start.go:255] writing updated cluster config ...
	I0816 18:19:35.705096   75006 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:35.753637   75006 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:35.755747   75006 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-256678" cluster and "default" namespace by default
	I0816 18:19:33.732856   74510 logs.go:123] Gathering logs for kube-controller-manager [72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c] ...
	I0816 18:19:33.732881   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72d29c313c76c5c21caefda9b44bb7a66defeafb724b7fde6960caf5e912f57c"
	I0816 18:19:33.796167   74510 logs.go:123] Gathering logs for storage-provisioner [08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970] ...
	I0816 18:19:33.796215   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08db52c38328f60508e736ef773fa44e068b84e5b4d48e83de0c5ac824804970"
	I0816 18:19:33.835842   74510 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:19:33.835869   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:19:33.956412   74510 logs.go:123] Gathering logs for kube-apiserver [8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b] ...
	I0816 18:19:33.956450   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c78984b6e3a7f35d60d1dc0e51758d0fb14615fa6f07340e936369aac73d01b"
	I0816 18:19:34.004102   74510 logs.go:123] Gathering logs for etcd [fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468] ...
	I0816 18:19:34.004137   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0d63ff38eb413e8e8aa005851212df0ff5067a10f3da9a9b4a797c8d3ad468"
	I0816 18:19:34.050504   74510 logs.go:123] Gathering logs for coredns [3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d] ...
	I0816 18:19:34.050548   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3918f8eb004ee3e592d32a0d0e29d94959c1eb6951a1dd69286a282e40a1417d"
	I0816 18:19:34.087815   74510 logs.go:123] Gathering logs for kube-scheduler [99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc] ...
	I0816 18:19:34.087850   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d68f23b3bc92f063c0f51a38bd7d2accbe569dfa32d8c3d19b10ecc3bbb9dc"
	I0816 18:19:34.124096   74510 logs.go:123] Gathering logs for kubelet ...
	I0816 18:19:34.124127   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:19:34.193377   74510 logs.go:123] Gathering logs for dmesg ...
	I0816 18:19:34.193410   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:19:34.206480   74510 logs.go:123] Gathering logs for storage-provisioner [81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e] ...
	I0816 18:19:34.206505   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81f4d0a5702662e8c490fd9055dabdacd28af7cd3a903c3edcf8369a453f0f0e"
	I0816 18:19:34.240262   74510 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:19:34.240305   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:19:34.591979   74510 logs.go:123] Gathering logs for container status ...
	I0816 18:19:34.592014   74510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:19:37.142552   74510 system_pods.go:59] 8 kube-system pods found
	I0816 18:19:37.142580   74510 system_pods.go:61] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.142585   74510 system_pods.go:61] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.142590   74510 system_pods.go:61] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.142593   74510 system_pods.go:61] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.142596   74510 system_pods.go:61] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.142600   74510 system_pods.go:61] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.142605   74510 system_pods.go:61] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.142609   74510 system_pods.go:61] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.142616   74510 system_pods.go:74] duration metric: took 3.770043434s to wait for pod list to return data ...
	I0816 18:19:37.142625   74510 default_sa.go:34] waiting for default service account to be created ...
	I0816 18:19:37.145135   74510 default_sa.go:45] found service account: "default"
	I0816 18:19:37.145161   74510 default_sa.go:55] duration metric: took 2.530779ms for default service account to be created ...
	I0816 18:19:37.145169   74510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 18:19:37.149397   74510 system_pods.go:86] 8 kube-system pods found
	I0816 18:19:37.149423   74510 system_pods.go:89] "coredns-6f6b679f8f-8njs2" [f29c31e1-4c2a-4dd8-ba60-62998504c55e] Running
	I0816 18:19:37.149431   74510 system_pods.go:89] "etcd-embed-certs-777541" [9cad9a1c-cea5-4271-9a68-3689ecdad607] Running
	I0816 18:19:37.149437   74510 system_pods.go:89] "kube-apiserver-embed-certs-777541" [a5105f98-368f-4687-8eca-ecc66ae59b42] Running
	I0816 18:19:37.149443   74510 system_pods.go:89] "kube-controller-manager-embed-certs-777541" [63cb72fb-167b-446b-874d-6ee665ad8a55] Running
	I0816 18:19:37.149451   74510 system_pods.go:89] "kube-proxy-j5rl7" [fcbc8903-6fa2-4f55-9ec0-92b77e21fb08] Running
	I0816 18:19:37.149458   74510 system_pods.go:89] "kube-scheduler-embed-certs-777541" [f2224375-d2f9-4e85-9eb8-26070a90642f] Running
	I0816 18:19:37.149471   74510 system_pods.go:89] "metrics-server-6867b74b74-6hkzb" [3e01da8d-7ddf-47cc-9079-5162cf2c2b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 18:19:37.149480   74510 system_pods.go:89] "storage-provisioner" [6fc6c4da-0e0f-45cc-84a6-bd4907f5e852] Running
	I0816 18:19:37.149491   74510 system_pods.go:126] duration metric: took 4.31556ms to wait for k8s-apps to be running ...
	I0816 18:19:37.149502   74510 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 18:19:37.149564   74510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:19:37.166663   74510 system_svc.go:56] duration metric: took 17.15398ms WaitForService to wait for kubelet
	I0816 18:19:37.166692   74510 kubeadm.go:582] duration metric: took 4m25.517719342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:19:37.166711   74510 node_conditions.go:102] verifying NodePressure condition ...
	I0816 18:19:37.170081   74510 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 18:19:37.170102   74510 node_conditions.go:123] node cpu capacity is 2
	I0816 18:19:37.170112   74510 node_conditions.go:105] duration metric: took 3.396116ms to run NodePressure ...
	I0816 18:19:37.170122   74510 start.go:241] waiting for startup goroutines ...
	I0816 18:19:37.170129   74510 start.go:246] waiting for cluster config update ...
	I0816 18:19:37.170138   74510 start.go:255] writing updated cluster config ...
	I0816 18:19:37.170406   74510 ssh_runner.go:195] Run: rm -f paused
	I0816 18:19:37.218383   74510 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 18:19:37.220397   74510 out.go:177] * Done! kubectl is now configured to use "embed-certs-777541" cluster and "default" namespace by default
	I0816 18:19:37.609143   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:37.609401   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:19:47.609941   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:19:47.610185   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:07.611108   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:07.611350   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613446   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:20:47.613708   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:20:47.613742   75402 kubeadm.go:310] 
	I0816 18:20:47.613809   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:20:47.613902   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:20:47.613926   75402 kubeadm.go:310] 
	I0816 18:20:47.613976   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:20:47.614028   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:20:47.614160   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:20:47.614174   75402 kubeadm.go:310] 
	I0816 18:20:47.614323   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:20:47.614383   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:20:47.614432   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:20:47.614441   75402 kubeadm.go:310] 
	I0816 18:20:47.614601   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:20:47.614730   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:20:47.614751   75402 kubeadm.go:310] 
	I0816 18:20:47.614875   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:20:47.614982   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:20:47.615101   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:20:47.615217   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:20:47.615230   75402 kubeadm.go:310] 
	I0816 18:20:47.616865   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:20:47.616971   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:20:47.617028   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0816 18:20:47.617173   75402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 18:20:47.617226   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 18:20:48.158066   75402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:20:48.172568   75402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 18:20:48.182445   75402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 18:20:48.182468   75402 kubeadm.go:157] found existing configuration files:
	
	I0816 18:20:48.182527   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 18:20:48.191779   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 18:20:48.191847   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 18:20:48.201531   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 18:20:48.210495   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 18:20:48.210568   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 18:20:48.219701   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.228170   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 18:20:48.228242   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 18:20:48.237366   75402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 18:20:48.246335   75402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 18:20:48.246393   75402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 18:20:48.255655   75402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 18:20:48.321873   75402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 18:20:48.321930   75402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 18:20:48.462199   75402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 18:20:48.462324   75402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 18:20:48.462448   75402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 18:20:48.646565   75402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 18:20:48.648485   75402 out.go:235]   - Generating certificates and keys ...
	I0816 18:20:48.648605   75402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 18:20:48.648748   75402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 18:20:48.648895   75402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 18:20:48.648994   75402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 18:20:48.649088   75402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 18:20:48.649185   75402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 18:20:48.649282   75402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 18:20:48.649368   75402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 18:20:48.649485   75402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 18:20:48.649595   75402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 18:20:48.649649   75402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 18:20:48.649753   75402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 18:20:48.864525   75402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 18:20:49.035729   75402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 18:20:49.086765   75402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 18:20:49.222612   75402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 18:20:49.239121   75402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 18:20:49.240158   75402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 18:20:49.240200   75402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 18:20:49.366027   75402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 18:20:49.367770   75402 out.go:235]   - Booting up control plane ...
	I0816 18:20:49.367907   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 18:20:49.373047   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 18:20:49.373886   75402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 18:20:49.374691   75402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 18:20:49.379220   75402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 18:21:29.381362   75402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 18:21:29.381473   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:29.381700   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:34.381889   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:34.382065   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:21:44.382765   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:21:44.382964   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:04.383485   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:04.383748   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382265   75402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 18:22:44.382558   75402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 18:22:44.382572   75402 kubeadm.go:310] 
	I0816 18:22:44.382628   75402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 18:22:44.382715   75402 kubeadm.go:310] 		timed out waiting for the condition
	I0816 18:22:44.382741   75402 kubeadm.go:310] 
	I0816 18:22:44.382789   75402 kubeadm.go:310] 	This error is likely caused by:
	I0816 18:22:44.382837   75402 kubeadm.go:310] 		- The kubelet is not running
	I0816 18:22:44.382986   75402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 18:22:44.382997   75402 kubeadm.go:310] 
	I0816 18:22:44.383149   75402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 18:22:44.383202   75402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 18:22:44.383246   75402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 18:22:44.383258   75402 kubeadm.go:310] 
	I0816 18:22:44.383421   75402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 18:22:44.383534   75402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 18:22:44.383549   75402 kubeadm.go:310] 
	I0816 18:22:44.383743   75402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 18:22:44.383877   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 18:22:44.383993   75402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 18:22:44.384092   75402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 18:22:44.384103   75402 kubeadm.go:310] 
	I0816 18:22:44.384783   75402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 18:22:44.384895   75402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 18:22:44.384986   75402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 18:22:44.385062   75402 kubeadm.go:394] duration metric: took 8m1.372176417s to StartCluster
	I0816 18:22:44.385108   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:22:44.385173   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:22:44.425862   75402 cri.go:89] found id: ""
	I0816 18:22:44.425892   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.425901   75402 logs.go:278] No container was found matching "kube-apiserver"
	I0816 18:22:44.425909   75402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 18:22:44.425982   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:22:44.461988   75402 cri.go:89] found id: ""
	I0816 18:22:44.462019   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.462030   75402 logs.go:278] No container was found matching "etcd"
	I0816 18:22:44.462038   75402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 18:22:44.462109   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:22:44.496063   75402 cri.go:89] found id: ""
	I0816 18:22:44.496095   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.496106   75402 logs.go:278] No container was found matching "coredns"
	I0816 18:22:44.496114   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:22:44.496175   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:22:44.529875   75402 cri.go:89] found id: ""
	I0816 18:22:44.529899   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.529906   75402 logs.go:278] No container was found matching "kube-scheduler"
	I0816 18:22:44.529912   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:22:44.529958   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:22:44.565745   75402 cri.go:89] found id: ""
	I0816 18:22:44.565781   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.565791   75402 logs.go:278] No container was found matching "kube-proxy"
	I0816 18:22:44.565798   75402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:22:44.565860   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:22:44.604122   75402 cri.go:89] found id: ""
	I0816 18:22:44.604149   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.604160   75402 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 18:22:44.604168   75402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 18:22:44.604228   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:22:44.636607   75402 cri.go:89] found id: ""
	I0816 18:22:44.636658   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.636669   75402 logs.go:278] No container was found matching "kindnet"
	I0816 18:22:44.636677   75402 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:22:44.636736   75402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:22:44.670942   75402 cri.go:89] found id: ""
	I0816 18:22:44.670973   75402 logs.go:276] 0 containers: []
	W0816 18:22:44.670981   75402 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 18:22:44.670989   75402 logs.go:123] Gathering logs for kubelet ...
	I0816 18:22:44.671001   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 18:22:44.722403   75402 logs.go:123] Gathering logs for dmesg ...
	I0816 18:22:44.722433   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:22:44.738587   75402 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:22:44.738627   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 18:22:44.854530   75402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 18:22:44.854563   75402 logs.go:123] Gathering logs for CRI-O ...
	I0816 18:22:44.854579   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 18:22:44.957308   75402 logs.go:123] Gathering logs for container status ...
	I0816 18:22:44.957342   75402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 18:22:44.997652   75402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 18:22:44.997714   75402 out.go:270] * 
	W0816 18:22:44.997804   75402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:44.997828   75402 out.go:270] * 
	W0816 18:22:44.998787   75402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:22:45.002189   75402 out.go:201] 
	W0816 18:22:45.003254   75402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 18:22:45.003310   75402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 18:22:45.003340   75402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 18:22:45.004826   75402 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.057472774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833252057442886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=736a8e85-5a3f-4451-8a61-2817e517ef5b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.057990917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=013f5fbf-d01f-4b61-ba15-c7bc59784b6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.058053448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=013f5fbf-d01f-4b61-ba15-c7bc59784b6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.058082562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=013f5fbf-d01f-4b61-ba15-c7bc59784b6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.089032478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efadaa7b-1a84-470f-af66-7207162c1166 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.089112921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efadaa7b-1a84-470f-af66-7207162c1166 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.090455339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97b97b12-f0fa-4d1b-9d3b-c21ab28ac80d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.090848276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833252090824877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97b97b12-f0fa-4d1b-9d3b-c21ab28ac80d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.091561242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=392c9b96-c682-447c-b51e-4e8017165937 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.091619189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=392c9b96-c682-447c-b51e-4e8017165937 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.091663118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=392c9b96-c682-447c-b51e-4e8017165937 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.125585429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10ca4f03-81a9-4cf6-b5a1-53ef0409be7c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.125659923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10ca4f03-81a9-4cf6-b5a1-53ef0409be7c name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.127338859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94c14a9f-8574-4d8a-bbf7-f2077a9c120e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.127735312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833252127713305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94c14a9f-8574-4d8a-bbf7-f2077a9c120e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.128275389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7c0b692-c65d-478c-89cd-e941a70d1ddf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.128323848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7c0b692-c65d-478c-89cd-e941a70d1ddf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.128366356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b7c0b692-c65d-478c-89cd-e941a70d1ddf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.158117479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20fc783a-579a-4184-bc79-caa9b972cef5 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.158232916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20fc783a-579a-4184-bc79-caa9b972cef5 name=/runtime.v1.RuntimeService/Version
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.159579521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ee7e19b-0f9a-4d0a-9d72-69c16ad79108 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.160061226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723833252160027698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ee7e19b-0f9a-4d0a-9d72-69c16ad79108 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.160703184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecdcb94c-7d56-4255-828a-7a1f6534200c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.160779593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecdcb94c-7d56-4255-828a-7a1f6534200c name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 18:34:12 old-k8s-version-783465 crio[653]: time="2024-08-16 18:34:12.160813829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ecdcb94c-7d56-4255-828a-7a1f6534200c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 18:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045169] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997853] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.853876] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.352877] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.345481] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.064693] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054338] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.181344] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.146416] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.232451] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +6.280356] systemd-fstab-generator[902]: Ignoring "noauto" option for root device
	[  +0.058572] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.868893] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[ +13.997238] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 18:18] systemd-fstab-generator[5183]: Ignoring "noauto" option for root device
	[Aug16 18:20] systemd-fstab-generator[5458]: Ignoring "noauto" option for root device
	[  +0.064746] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:34:12 up 19 min,  0 users,  load average: 0.04, 0.04, 0.03
	Linux old-k8s-version-783465 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000bfabd0)
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: goroutine 161 [select]:
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b41ef0, 0x4f0ac20, 0xc000c16370, 0x1, 0xc00009e0c0)
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000246700, 0xc00009e0c0)
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b86dd0, 0xc000bef060)
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 16 18:34:08 old-k8s-version-783465 kubelet[6956]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 16 18:34:08 old-k8s-version-783465 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 18:34:08 old-k8s-version-783465 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 16 18:34:09 old-k8s-version-783465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 139.
	Aug 16 18:34:09 old-k8s-version-783465 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 18:34:09 old-k8s-version-783465 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 18:34:09 old-k8s-version-783465 kubelet[6965]: I0816 18:34:09.344388    6965 server.go:416] Version: v1.20.0
	Aug 16 18:34:09 old-k8s-version-783465 kubelet[6965]: I0816 18:34:09.344767    6965 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 18:34:09 old-k8s-version-783465 kubelet[6965]: I0816 18:34:09.346887    6965 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 18:34:09 old-k8s-version-783465 kubelet[6965]: W0816 18:34:09.348575    6965 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 16 18:34:09 old-k8s-version-783465 kubelet[6965]: I0816 18:34:09.348740    6965 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 2 (220.932338ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-783465" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (141.72s)

                                                
                                    

Test pass (252/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 13.73
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.05
18 TestDownloadOnly/v1.31.0/DeleteAll 0.14
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 76.99
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 133.9
31 TestAddons/serial/GCPAuth/Namespaces 0.14
33 TestAddons/parallel/Registry 17.33
35 TestAddons/parallel/InspektorGadget 10.75
37 TestAddons/parallel/HelmTiller 13.05
39 TestAddons/parallel/CSI 95.25
40 TestAddons/parallel/Headlamp 19.82
41 TestAddons/parallel/CloudSpanner 5.5
42 TestAddons/parallel/LocalPath 13.17
43 TestAddons/parallel/NvidiaDevicePlugin 5.71
44 TestAddons/parallel/Yakd 12.05
46 TestCertOptions 45.96
47 TestCertExpiration 271.48
49 TestForceSystemdFlag 58.38
50 TestForceSystemdEnv 60.71
52 TestKVMDriverInstallOrUpdate 3.52
56 TestErrorSpam/setup 39.34
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.69
59 TestErrorSpam/pause 1.49
60 TestErrorSpam/unpause 1.62
61 TestErrorSpam/stop 4.84
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 84.4
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.82
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.16
73 TestFunctional/serial/CacheCmd/cache/add_local 2.11
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 33.74
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.31
84 TestFunctional/serial/LogsFileCmd 1.32
85 TestFunctional/serial/InvalidService 3.85
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 18.47
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.94
95 TestFunctional/parallel/ServiceCmdConnect 6.57
96 TestFunctional/parallel/AddonsCmd 0.16
97 TestFunctional/parallel/PersistentVolumeClaim 40.82
99 TestFunctional/parallel/SSHCmd 0.46
100 TestFunctional/parallel/CpCmd 1.34
101 TestFunctional/parallel/MySQL 32.33
102 TestFunctional/parallel/FileSync 0.26
103 TestFunctional/parallel/CertSync 1.44
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
111 TestFunctional/parallel/License 0.55
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
114 TestFunctional/parallel/MountCmd/any-port 10.82
115 TestFunctional/parallel/ProfileCmd/profile_list 0.34
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
126 TestFunctional/parallel/MountCmd/specific-port 1.91
127 TestFunctional/parallel/ServiceCmd/List 0.43
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
130 TestFunctional/parallel/ServiceCmd/Format 0.28
131 TestFunctional/parallel/ServiceCmd/URL 0.34
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.38
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.24
141 TestFunctional/parallel/ImageCommands/Setup 1.91
142 TestFunctional/parallel/Version/short 0.04
143 TestFunctional/parallel/Version/components 0.55
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.09
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
148 TestFunctional/parallel/ImageCommands/ImageRemove 3.51
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.12
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 197.79
158 TestMultiControlPlane/serial/DeployApp 6.26
159 TestMultiControlPlane/serial/PingHostFromPods 1.15
160 TestMultiControlPlane/serial/AddWorkerNode 56.89
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.31
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.52
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 339.2
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 74.49
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
179 TestJSONOutput/start/Command 75.95
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.66
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.58
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.52
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 86.45
211 TestMountStart/serial/StartWithMountFirst 24.08
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 28.84
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 22.85
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 110.71
223 TestMultiNode/serial/DeployApp2Nodes 5.12
224 TestMultiNode/serial/PingHostFrom2Pods 0.75
225 TestMultiNode/serial/AddNode 51.36
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.9
229 TestMultiNode/serial/StopNode 2.17
230 TestMultiNode/serial/StartAfterStop 39.32
232 TestMultiNode/serial/DeleteNode 2.15
234 TestMultiNode/serial/RestartMultiNode 194.56
235 TestMultiNode/serial/ValidateNameConflict 41.3
242 TestScheduledStopUnix 109.64
246 TestRunningBinaryUpgrade 192.88
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
252 TestNoKubernetes/serial/StartWithK8s 85.99
253 TestNoKubernetes/serial/StartWithStopK8s 41.99
254 TestNoKubernetes/serial/Start 27.35
262 TestNetworkPlugins/group/false 2.83
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
274 TestNoKubernetes/serial/ProfileList 28.18
275 TestNoKubernetes/serial/Stop 2.58
276 TestNoKubernetes/serial/StartNoArgs 23.84
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
279 TestPause/serial/Start 72.17
280 TestPause/serial/SecondStartNoReconfiguration 52.05
281 TestStoppedBinaryUpgrade/Setup 2.28
282 TestStoppedBinaryUpgrade/Upgrade 125.67
283 TestPause/serial/Pause 0.87
284 TestPause/serial/VerifyStatus 0.26
285 TestPause/serial/Unpause 1.03
286 TestPause/serial/PauseAgain 1.33
287 TestPause/serial/DeletePaused 1.39
288 TestPause/serial/VerifyDeletedResources 15.97
289 TestNetworkPlugins/group/auto/Start 51.93
290 TestNetworkPlugins/group/auto/KubeletFlags 0.61
291 TestNetworkPlugins/group/auto/NetCatPod 12.19
292 TestNetworkPlugins/group/auto/DNS 16.43
293 TestNetworkPlugins/group/flannel/Start 74.16
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
295 TestNetworkPlugins/group/enable-default-cni/Start 109.15
296 TestNetworkPlugins/group/auto/Localhost 0.13
297 TestNetworkPlugins/group/auto/HairPin 0.12
298 TestNetworkPlugins/group/bridge/Start 95.49
299 TestNetworkPlugins/group/calico/Start 120.09
300 TestNetworkPlugins/group/flannel/ControllerPod 6.01
301 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
302 TestNetworkPlugins/group/flannel/NetCatPod 12.26
303 TestNetworkPlugins/group/flannel/DNS 0.17
304 TestNetworkPlugins/group/flannel/Localhost 0.13
305 TestNetworkPlugins/group/flannel/HairPin 0.15
306 TestNetworkPlugins/group/kindnet/Start 67.71
307 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
308 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
309 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
310 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
311 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
312 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
313 TestNetworkPlugins/group/bridge/NetCatPod 10.23
314 TestNetworkPlugins/group/bridge/DNS 0.21
315 TestNetworkPlugins/group/bridge/Localhost 0.17
316 TestNetworkPlugins/group/bridge/HairPin 0.19
317 TestNetworkPlugins/group/custom-flannel/Start 75.01
320 TestNetworkPlugins/group/calico/ControllerPod 6.01
321 TestNetworkPlugins/group/calico/KubeletFlags 0.22
322 TestNetworkPlugins/group/calico/NetCatPod 10.22
323 TestNetworkPlugins/group/calico/DNS 0.21
324 TestNetworkPlugins/group/calico/Localhost 0.16
325 TestNetworkPlugins/group/calico/HairPin 0.15
326 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
327 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
328 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
330 TestStartStop/group/embed-certs/serial/FirstStart 90.5
331 TestNetworkPlugins/group/kindnet/DNS 0.17
332 TestNetworkPlugins/group/kindnet/Localhost 0.15
333 TestNetworkPlugins/group/kindnet/HairPin 0.12
335 TestStartStop/group/no-preload/serial/FirstStart 103.35
336 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
337 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.22
338 TestNetworkPlugins/group/custom-flannel/DNS 0.16
339 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
340 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.51
343 TestStartStop/group/embed-certs/serial/DeployApp 10.32
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
346 TestStartStop/group/no-preload/serial/DeployApp 10.27
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
355 TestStartStop/group/embed-certs/serial/SecondStart 638.8
357 TestStartStop/group/no-preload/serial/SecondStart 592.15
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 586.14
360 TestStartStop/group/old-k8s-version/serial/Stop 2.28
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/newest-cni/serial/FirstStart 46.87
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
375 TestStartStop/group/newest-cni/serial/Stop 7.3
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
377 TestStartStop/group/newest-cni/serial/SecondStart 34.93
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
381 TestStartStop/group/newest-cni/serial/Pause 2.51
x
+
TestDownloadOnly/v1.20.0/json-events (23.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-651132 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-651132 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.098799179s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-651132
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-651132: exit status 85 (58.820198ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-651132 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |          |
	|         | -p download-only-651132        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 16:48:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 16:48:19.754657   16764 out.go:345] Setting OutFile to fd 1 ...
	I0816 16:48:19.754775   16764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:19.754784   16764 out.go:358] Setting ErrFile to fd 2...
	I0816 16:48:19.754788   16764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:19.754976   16764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	W0816 16:48:19.755102   16764 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19461-9545/.minikube/config/config.json: open /home/jenkins/minikube-integration/19461-9545/.minikube/config/config.json: no such file or directory
	I0816 16:48:19.755696   16764 out.go:352] Setting JSON to true
	I0816 16:48:19.756510   16764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1798,"bootTime":1723825102,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 16:48:19.756568   16764 start.go:139] virtualization: kvm guest
	I0816 16:48:19.758826   16764 out.go:97] [download-only-651132] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0816 16:48:19.758910   16764 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 16:48:19.758960   16764 notify.go:220] Checking for updates...
	I0816 16:48:19.760564   16764 out.go:169] MINIKUBE_LOCATION=19461
	I0816 16:48:19.761898   16764 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 16:48:19.763248   16764 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 16:48:19.764630   16764 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:19.765882   16764 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0816 16:48:19.767794   16764 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 16:48:19.767974   16764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 16:48:19.870177   16764 out.go:97] Using the kvm2 driver based on user configuration
	I0816 16:48:19.870206   16764 start.go:297] selected driver: kvm2
	I0816 16:48:19.870217   16764 start.go:901] validating driver "kvm2" against <nil>
	I0816 16:48:19.870561   16764 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:19.870716   16764 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 16:48:19.885311   16764 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 16:48:19.885362   16764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 16:48:19.885836   16764 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0816 16:48:19.885976   16764 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 16:48:19.886004   16764 cni.go:84] Creating CNI manager for ""
	I0816 16:48:19.886015   16764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:48:19.886025   16764 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 16:48:19.886063   16764 start.go:340] cluster config:
	{Name:download-only-651132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-651132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 16:48:19.886216   16764 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:19.887922   16764 out.go:97] Downloading VM boot image ...
	I0816 16:48:19.887964   16764 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19461-9545/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 16:48:29.513770   16764 out.go:97] Starting "download-only-651132" primary control-plane node in "download-only-651132" cluster
	I0816 16:48:29.513789   16764 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 16:48:29.612865   16764 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 16:48:29.612898   16764 cache.go:56] Caching tarball of preloaded images
	I0816 16:48:29.613063   16764 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 16:48:29.614562   16764 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 16:48:29.614576   16764 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 16:48:29.717265   16764 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-651132 host does not exist
	  To start a cluster, run: "minikube start -p download-only-651132"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-651132
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (13.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-696494 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-696494 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.728501719s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (13.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-696494
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-696494: exit status 85 (54.408247ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-651132 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | -p download-only-651132        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:48 UTC |
	| delete  | -p download-only-651132        | download-only-651132 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC | 16 Aug 24 16:48 UTC |
	| start   | -o=json --download-only        | download-only-696494 | jenkins | v1.33.1 | 16 Aug 24 16:48 UTC |                     |
	|         | -p download-only-696494        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 16:48:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 16:48:43.161867   17026 out.go:345] Setting OutFile to fd 1 ...
	I0816 16:48:43.161988   17026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:43.161999   17026 out.go:358] Setting ErrFile to fd 2...
	I0816 16:48:43.162003   17026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 16:48:43.162212   17026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 16:48:43.162800   17026 out.go:352] Setting JSON to true
	I0816 16:48:43.163760   17026 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1821,"bootTime":1723825102,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 16:48:43.163819   17026 start.go:139] virtualization: kvm guest
	I0816 16:48:43.165726   17026 out.go:97] [download-only-696494] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 16:48:43.165892   17026 notify.go:220] Checking for updates...
	I0816 16:48:43.167166   17026 out.go:169] MINIKUBE_LOCATION=19461
	I0816 16:48:43.168426   17026 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 16:48:43.169802   17026 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 16:48:43.171020   17026 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 16:48:43.172373   17026 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0816 16:48:43.174753   17026 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 16:48:43.175033   17026 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 16:48:43.207511   17026 out.go:97] Using the kvm2 driver based on user configuration
	I0816 16:48:43.207539   17026 start.go:297] selected driver: kvm2
	I0816 16:48:43.207551   17026 start.go:901] validating driver "kvm2" against <nil>
	I0816 16:48:43.207874   17026 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:43.207958   17026 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19461-9545/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 16:48:43.222767   17026 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 16:48:43.222815   17026 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 16:48:43.223264   17026 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0816 16:48:43.223437   17026 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 16:48:43.223466   17026 cni.go:84] Creating CNI manager for ""
	I0816 16:48:43.223473   17026 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 16:48:43.223483   17026 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 16:48:43.223539   17026 start.go:340] cluster config:
	{Name:download-only-696494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-696494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 16:48:43.223627   17026 iso.go:125] acquiring lock: {Name:mke35866c1d0f078e1f027c2d727692722810e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 16:48:43.225211   17026 out.go:97] Starting "download-only-696494" primary control-plane node in "download-only-696494" cluster
	I0816 16:48:43.225228   17026 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 16:48:43.723395   17026 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 16:48:43.723433   17026 cache.go:56] Caching tarball of preloaded images
	I0816 16:48:43.723612   17026 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 16:48:43.725357   17026 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 16:48:43.725385   17026 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0816 16:48:43.824331   17026 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19461-9545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-696494 host does not exist
	  To start a cluster, run: "minikube start -p download-only-696494"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-696494
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-250559 --alsologtostderr --binary-mirror http://127.0.0.1:41735 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-250559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-250559
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestOffline (76.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-989328 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-989328 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.20969854s)
helpers_test.go:175: Cleaning up "offline-crio-989328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-989328
--- PASS: TestOffline (76.99s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-671083
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-671083: exit status 85 (44.21704ms)

                                                
                                                
-- stdout --
	* Profile "addons-671083" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-671083"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-671083
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-671083: exit status 85 (45.388629ms)

                                                
                                                
-- stdout --
	* Profile "addons-671083" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-671083"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (133.9s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-671083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-671083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m13.898318038s)
--- PASS: TestAddons/Setup (133.90s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-671083 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-671083 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.662141ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-rvzfr" [ef669560-d120-4b0c-96ee-3b4786b10c8c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002984834s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qpbf4" [afdfd628-7037-4056-b825-d6a9bf88c250] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003643351s
addons_test.go:342: (dbg) Run:  kubectl --context addons-671083 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-671083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-671083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.470750311s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 ip
2024/08/16 16:51:46 [DEBUG] GET http://192.168.39.240:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.33s)
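
The in-cluster registry probe above can be repeated by hand while the addons-671083 profile is still running; this is only a sketch, and the pod name registry-probe is arbitrary rather than part of the test.

    kubectl --context addons-671083 run registry-probe --rm --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"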

                                                
                                    
TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vz8gb" [2230d582-e9c4-4ec2-a573-91584eab4e82] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005875621s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-671083
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-671083: (5.740908058s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.05s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.199473ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-xdgrc" [9075d95d-30f9-45ec-944b-3ee3d7e01862] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005865404s
addons_test.go:475: (dbg) Run:  kubectl --context addons-671083 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-671083 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.104660093s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.05s)

                                                
                                    
TestAddons/parallel/CSI (95.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.097589ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-671083 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-671083 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eaa6467d-f3ba-4409-91a0-5d1268245c27] Pending
helpers_test.go:344: "task-pv-pod" [eaa6467d-f3ba-4409-91a0-5d1268245c27] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [eaa6467d-f3ba-4409-91a0-5d1268245c27] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004184191s
addons_test.go:590: (dbg) Run:  kubectl --context addons-671083 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-671083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-671083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-671083 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-671083 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-671083 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-671083 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [77795698-d59f-46af-b2c6-195f289ab0b7] Pending
helpers_test.go:344: "task-pv-pod-restore" [77795698-d59f-46af-b2c6-195f289ab0b7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [77795698-d59f-46af-b2c6-195f289ab0b7] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003762996s
addons_test.go:632: (dbg) Run:  kubectl --context addons-671083 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-671083 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-671083 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-671083 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.668317211s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (95.25s)
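
The long run of helpers_test.go:394 lines above is a polling loop on the PVC phase. A rough shell equivalent is sketched below; the 2s interval is an assumption, while the 6m0s ceiling comes from the "waiting 6m0s" message in the log.

    # poll until the hpvc claim reports Bound (the harness gives up after 6m0s)
    while [ "$(kubectl --context addons-671083 get pvc hpvc -n default -o jsonpath='{.status.phase}')" != "Bound" ]; do
      sleep 2
    done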

                                                
                                    
TestAddons/parallel/Headlamp (19.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-671083 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-l5ch8" [8d52f855-2698-449f-9828-223f07265e96] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-l5ch8" [8d52f855-2698-449f-9828-223f07265e96] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004981198s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-671083 addons disable headlamp --alsologtostderr -v=1: (5.81863483s)
--- PASS: TestAddons/parallel/Headlamp (19.82s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-jwz4z" [ba0baecb-e84a-4f9f-a1a1-6e1b74d8a353] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003860857s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-671083
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/LocalPath (13.17s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-671083 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-671083 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [86f0de56-5e94-4cb2-8912-b26d1bb2d1d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [86f0de56-5e94-4cb2-8912-b26d1bb2d1d5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [86f0de56-5e94-4cb2-8912-b26d1bb2d1d5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003834086s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-671083 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 ssh "cat /opt/local-path-provisioner/pvc-38437f91-cec1-425d-a656-8ecfa2176521_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-671083 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-671083 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.17s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6fkvh" [fad33474-a661-4441-a3d3-61e1e753fc6a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004991496s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-671083
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                    
TestAddons/parallel/Yakd (12.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rpbt5" [7259f6ca-52ff-4173-9e5d-c5bcd34ec342] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004043846s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-671083 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-671083 addons disable yakd --alsologtostderr -v=1: (6.049051181s)
--- PASS: TestAddons/parallel/Yakd (12.05s)

                                                
                                    
TestCertOptions (45.96s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-232274 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-232274 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.760247896s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-232274 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-232274 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-232274 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-232274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-232274
--- PASS: TestCertOptions (45.96s)
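
The certificate checks above read the generated apiserver certificate over ssh. One way to eyeball the configured SANs and port by hand, assuming the profile still exists; the grep filter is an addition for readability, not part of the test.

    out/minikube-linux-amd64 -p cert-options-232274 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"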

                                                
                                    
TestCertExpiration (271.48s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-014588 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-014588 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.429381001s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-014588 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0816 18:01:12.269094   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-014588 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (46.246194187s)
helpers_test.go:175: Cleaning up "cert-expiration-014588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-014588
--- PASS: TestCertExpiration (271.48s)

                                                
                                    
TestForceSystemdFlag (58.38s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-703169 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-703169 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.176721706s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-703169 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-703169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-703169
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-703169: (1.014315614s)
--- PASS: TestForceSystemdFlag (58.38s)

                                                
                                    
TestForceSystemdEnv (60.71s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-015155 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-015155 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.697485117s)
helpers_test.go:175: Cleaning up "force-systemd-env-015155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-015155
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-015155: (1.013241187s)
--- PASS: TestForceSystemdEnv (60.71s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.52s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.52s)

                                                
                                    
TestErrorSpam/setup (39.34s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-595234 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-595234 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-595234 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-595234 --driver=kvm2  --container-runtime=crio: (39.337226269s)
--- PASS: TestErrorSpam/setup (39.34s)

                                                
                                    
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

                                                
                                    
TestErrorSpam/stop (4.84s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 stop: (1.549142857s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 stop: (1.345982517s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-595234 --log_dir /tmp/nospam-595234 stop: (1.946877917s)
--- PASS: TestErrorSpam/stop (4.84s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19461-9545/.minikube/files/etc/test/nested/copy/16753/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (84.4s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-654639 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0816 17:01:12.269763   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:12.276631   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:12.288010   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:12.309401   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:12.350844   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:12.432283   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:12.593846   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:12.915648   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:13.557757   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:14.839735   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:17.402672   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:22.524793   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:01:32.766227   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-654639 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.403646226s)
--- PASS: TestFunctional/serial/StartWithProxy (84.40s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.82s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-654639 --alsologtostderr -v=8
E0816 17:01:53.247730   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-654639 --alsologtostderr -v=8: (40.821126249s)
functional_test.go:663: soft start took 40.821782069s for "functional-654639" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.82s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-654639 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 cache add registry.k8s.io/pause:3.1: (1.357969024s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cache add registry.k8s.io/pause:3.3
E0816 17:02:34.209894   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 cache add registry.k8s.io/pause:3.3: (1.455351956s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 cache add registry.k8s.io/pause:latest: (1.347157445s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-654639 /tmp/TestFunctionalserialCacheCmdcacheadd_local1470834196/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cache add minikube-local-cache-test:functional-654639
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 cache add minikube-local-cache-test:functional-654639: (1.804184086s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cache delete minikube-local-cache-test:functional-654639
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-654639
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.11349ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 cache reload: (1.116719394s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
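
The sequence above amounts to: remove the image on the node, reload it from minikube's local cache, then confirm it is present again. The same three commands from the log, runnable as-is against the functional-654639 profile:

    out/minikube-linux-amd64 -p functional-654639 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-654639 cache reload
    out/minikube-linux-amd64 -p functional-654639 ssh sudo crictl inspecti registry.k8s.io/pause:latest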

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 kubectl -- --context functional-654639 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-654639 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.74s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-654639 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-654639 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.739092807s)
functional_test.go:761: restart took 33.739271867s for "functional-654639" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.74s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-654639 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 logs: (1.306642473s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 logs --file /tmp/TestFunctionalserialLogsFileCmd4038549200/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 logs --file /tmp/TestFunctionalserialLogsFileCmd4038549200/001/logs.txt: (1.316661757s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
TestFunctional/serial/InvalidService (3.85s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-654639 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-654639
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-654639: exit status 115 (264.941286ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.207:31394 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-654639 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.85s)
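
Exit status 115 (SVC_UNREACHABLE) is the expected result here, because invalid-svc selects no running pod. A manual reproduction with the same manifest, as a sketch; testdata/invalidsvc.yaml ships with the minikube test tree.

    kubectl --context functional-654639 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-654639    # expected to exit 115
    kubectl --context functional-654639 delete -f testdata/invalidsvc.yaml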

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 config get cpus: exit status 14 (57.354478ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 config get cpus: exit status 14 (53.168524ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
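
The two exit status 14 results above are expected: config get returns 14 when the key is not set. A quick manual round-trip with the same commands, sketched:

    out/minikube-linux-amd64 -p functional-654639 config get cpus      # exit 14: key not set
    out/minikube-linux-amd64 -p functional-654639 config set cpus 2
    out/minikube-linux-amd64 -p functional-654639 config get cpus      # should print 2
    out/minikube-linux-amd64 -p functional-654639 config unset cpus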

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-654639 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-654639 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25270: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.47s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-654639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-654639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.904693ms)

                                                
                                                
-- stdout --
	* [functional-654639] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:03:22.822164   25153 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:03:22.822420   25153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:03:22.822429   25153 out.go:358] Setting ErrFile to fd 2...
	I0816 17:03:22.822434   25153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:03:22.822700   25153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:03:22.823287   25153 out.go:352] Setting JSON to false
	I0816 17:03:22.824313   25153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2701,"bootTime":1723825102,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:03:22.824375   25153 start.go:139] virtualization: kvm guest
	I0816 17:03:22.826279   25153 out.go:177] * [functional-654639] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:03:22.827623   25153 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:03:22.827693   25153 notify.go:220] Checking for updates...
	I0816 17:03:22.830049   25153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:03:22.831322   25153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:03:22.832578   25153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:03:22.833692   25153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:03:22.834877   25153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:03:22.836756   25153 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:03:22.837536   25153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:03:22.837584   25153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:03:22.856161   25153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0816 17:03:22.856691   25153 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:03:22.857347   25153 main.go:141] libmachine: Using API Version  1
	I0816 17:03:22.857380   25153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:03:22.857727   25153 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:03:22.857898   25153 main.go:141] libmachine: (functional-654639) Calling .DriverName
	I0816 17:03:22.858124   25153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:03:22.858412   25153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:03:22.858458   25153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:03:22.874140   25153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0816 17:03:22.874507   25153 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:03:22.875166   25153 main.go:141] libmachine: Using API Version  1
	I0816 17:03:22.875190   25153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:03:22.875550   25153 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:03:22.875748   25153 main.go:141] libmachine: (functional-654639) Calling .DriverName
	I0816 17:03:22.915492   25153 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 17:03:22.916761   25153 start.go:297] selected driver: kvm2
	I0816 17:03:22.916785   25153 start.go:901] validating driver "kvm2" against &{Name:functional-654639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-654639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:03:22.916929   25153 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:03:22.919392   25153 out.go:201] 
	W0816 17:03:22.921041   25153 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 17:03:22.922537   25153 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-654639 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
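
The non-zero exit above is the expected behaviour: minikube validates the requested memory before touching the existing VM and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) when the request is below the usable 1800MB floor. A short sketch of the same check, assuming the functional-654639 profile already exists:

  # undersized request: validation fails, nothing is started
  minikube start -p functional-654639 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio
  echo $?    # expected: 23

  # no explicit --memory: the existing profile's settings are reused and the dry run succeeds
  minikube start -p functional-654639 --dry-run --alsologtostderr -v=1 \
    --driver=kvm2 --container-runtime=crio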

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-654639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-654639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.175937ms)

                                                
                                                
-- stdout --
	* [functional-654639] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:03:22.686785   25112 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:03:22.686910   25112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:03:22.686920   25112 out.go:358] Setting ErrFile to fd 2...
	I0816 17:03:22.686925   25112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:03:22.687190   25112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:03:22.687773   25112 out.go:352] Setting JSON to false
	I0816 17:03:22.688808   25112 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2701,"bootTime":1723825102,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:03:22.688870   25112 start.go:139] virtualization: kvm guest
	I0816 17:03:22.690916   25112 out.go:177] * [functional-654639] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0816 17:03:22.692599   25112 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:03:22.692618   25112 notify.go:220] Checking for updates...
	I0816 17:03:22.694872   25112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:03:22.696500   25112 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:03:22.697671   25112 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:03:22.698837   25112 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:03:22.700109   25112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:03:22.701877   25112 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:03:22.702526   25112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:03:22.702589   25112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:03:22.718066   25112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0816 17:03:22.718458   25112 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:03:22.719073   25112 main.go:141] libmachine: Using API Version  1
	I0816 17:03:22.719107   25112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:03:22.719489   25112 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:03:22.719670   25112 main.go:141] libmachine: (functional-654639) Calling .DriverName
	I0816 17:03:22.719907   25112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:03:22.720315   25112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:03:22.720370   25112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:03:22.736094   25112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0816 17:03:22.736506   25112 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:03:22.737054   25112 main.go:141] libmachine: Using API Version  1
	I0816 17:03:22.737077   25112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:03:22.737374   25112 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:03:22.737538   25112 main.go:141] libmachine: (functional-654639) Calling .DriverName
	I0816 17:03:22.772754   25112 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0816 17:03:22.773936   25112 start.go:297] selected driver: kvm2
	I0816 17:03:22.773962   25112 start.go:901] validating driver "kvm2" against &{Name:functional-654639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-654639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:03:22.774102   25112 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:03:22.776426   25112 out.go:201] 
	W0816 17:03:22.777687   25112 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 17:03:22.778895   25112 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
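
The French output above is the point of this test: the same undersized dry run is repeated with a French locale so the RSRC_INSUFFICIENT_REQ_MEMORY message comes out translated. A sketch of triggering it manually; using LC_ALL to select the locale is an assumption here, since the log does not show which environment variable the test sets:

  # request French messages; the memory validation still fails with exit status 23
  LC_ALL=fr_FR.UTF-8 minikube start -p functional-654639 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio
  # expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."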

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)
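
The status command supports both a Go template (-f) and JSON output (-o json); the template keys exercised here are .Host, .Kubelet, .APIServer and .Kubeconfig. A small sketch (the jq post-processing is illustrative and not part of the test):

  minikube -p functional-654639 status
  minikube -p functional-654639 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-654639 status -o json | jq .    # same data, machine readable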

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-654639 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-654639 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jsczb" [e397abd2-29ea-41d6-bf1e-192e6a865c78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jsczb" [e397abd2-29ea-41d6-bf1e-192e6a865c78] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004355309s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.207:31179
functional_test.go:1675: http://192.168.39.207:31179: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-jsczb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.207:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.207:31179
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.57s)
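
The flow verified here is the usual NodePort round trip: create a deployment, expose it, ask minikube for the node URL, then hit it. A sketch under the same names the test uses; the `kubectl wait` step stands in for the pod polling the test does:

  kubectl --context functional-654639 create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-654639 expose deployment hello-node-connect \
    --type=NodePort --port=8080
  kubectl --context functional-654639 wait --for=condition=available \
    deployment/hello-node-connect --timeout=120s
  URL=$(minikube -p functional-654639 service hello-node-connect --url)
  curl -s "$URL"    # echoserver reports hostname, request headers, request body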

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (40.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fada2611-f943-4602-ada9-261a5f49de60] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004894722s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-654639 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-654639 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-654639 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-654639 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ceb3f547-8ee1-4592-8248-34e8a2dc3aa5] Pending
helpers_test.go:344: "sp-pod" [ceb3f547-8ee1-4592-8248-34e8a2dc3aa5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ceb3f547-8ee1-4592-8248-34e8a2dc3aa5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004413141s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-654639 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-654639 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-654639 delete -f testdata/storage-provisioner/pod.yaml: (1.021998525s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-654639 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b3e13252-89ac-4b83-b015-334a435debac] Pending
helpers_test.go:344: "sp-pod" [b3e13252-89ac-4b83-b015-334a435debac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0816 17:03:56.131193   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [b3e13252-89ac-4b83-b015-334a435debac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004426007s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-654639 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.82s)
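
What this test demonstrates is that data written through the claim survives deletion and re-creation of the consuming pod, because the volume is provisioned by the storage-provisioner addon rather than being pod-local. A minimal sketch of the same sequence; pvc.yaml and pod.yaml stand in for the testdata/storage-provisioner manifests, and the claim/pod names and /tmp/mount path are taken from the log:

  kubectl --context functional-654639 apply -f pvc.yaml   # PersistentVolumeClaim "myclaim"
  kubectl --context functional-654639 apply -f pod.yaml   # pod "sp-pod" mounting the claim at /tmp/mount
  kubectl --context functional-654639 wait --for=condition=ready pod/sp-pod --timeout=180s
  kubectl --context functional-654639 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-654639 delete -f pod.yaml
  kubectl --context functional-654639 apply -f pod.yaml   # new pod, same claim
  kubectl --context functional-654639 wait --for=condition=ready pod/sp-pod --timeout=180s
  kubectl --context functional-654639 exec sp-pod -- ls /tmp/mount   # foo is still there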

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh -n functional-654639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cp functional-654639:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1081587076/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh -n functional-654639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh -n functional-654639 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)
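
`minikube cp` copies files both into and out of the node, and the final pair of commands shows that a destination directory which does not yet exist on the guest is created. A sketch using the same paths as the test:

  # host -> node
  minikube -p functional-654639 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-654639 ssh -n functional-654639 "sudo cat /home/docker/cp-test.txt"

  # node -> host
  minikube -p functional-654639 cp functional-654639:/home/docker/cp-test.txt /tmp/cp-test.txt

  # host -> node, into a directory that does not exist yet on the guest
  minikube -p functional-654639 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
  minikube -p functional-654639 ssh -n functional-654639 "sudo cat /tmp/does/not/exist/cp-test.txt"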

                                                
                                    
x
+
TestFunctional/parallel/MySQL (32.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-654639 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-plrkc" [89d8cea8-b7e4-4c97-9829-47dc2e1278e2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-plrkc" [89d8cea8-b7e4-4c97-9829-47dc2e1278e2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.004207397s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-654639 exec mysql-6cdb49bbb-plrkc -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-654639 exec mysql-6cdb49bbb-plrkc -- mysql -ppassword -e "show databases;": exit status 1 (135.35691ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-654639 exec mysql-6cdb49bbb-plrkc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.33s)
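
The first `show databases` attempt fails with ERROR 2002 because the pod is Running but mysqld has not yet created its socket; the test simply retries and the second attempt succeeds. A sketch of the same retry by hand (the loop and the 5-second sleep are illustrative, not part of the test):

  POD=$(kubectl --context functional-654639 get pods -l app=mysql \
    -o jsonpath='{.items[0].metadata.name}')
  # retry until mysqld inside the pod is actually accepting connections
  until kubectl --context functional-654639 exec "$POD" -- mysql -ppassword -e "show databases;"; do
    sleep 5
  done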

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16753/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo cat /etc/test/nested/copy/16753/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
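
FileSync checks that a file staged on the host shows up inside the VM at the mirrored path. In minikube this is driven by the $MINIKUBE_HOME/files tree, whose contents are copied into the node when the profile is (re)provisioned; that mechanism and the restart step below are background assumptions, since the log only shows the in-VM check:

  mkdir -p "${MINIKUBE_HOME:-$HOME/.minikube}/files/etc/test/nested/copy/16753"
  echo "Test file for checking file sync process" \
    > "${MINIKUBE_HOME:-$HOME/.minikube}/files/etc/test/nested/copy/16753/hosts"
  minikube -p functional-654639 start   # re-provision so the files tree is copied in (assumed)
  minikube -p functional-654639 ssh "sudo cat /etc/test/nested/copy/16753/hosts"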

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16753.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo cat /etc/ssl/certs/16753.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16753.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo cat /usr/share/ca-certificates/16753.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/167532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo cat /etc/ssl/certs/167532.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/167532.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo cat /usr/share/ca-certificates/167532.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)
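
CertSync verifies that a staged certificate is present in the VM under both /etc/ssl/certs and /usr/share/ca-certificates, plus as a hash-named file (51391683.0 and 3ec20f2e.0 in this run). A sketch of the verification side; treating 51391683 as the OpenSSL subject hash of the 16753.pem certificate is an assumption used for illustration:

  # fetch the synced cert back out of the VM and compute its subject hash
  minikube -p functional-654639 ssh "sudo cat /etc/ssl/certs/16753.pem" > /tmp/16753.pem
  openssl x509 -noout -hash -in /tmp/16753.pem            # expected to print 51391683 (assumed)
  minikube -p functional-654639 ssh "sudo cat /usr/share/ca-certificates/16753.pem"
  minikube -p functional-654639 ssh "sudo cat /etc/ssl/certs/51391683.0"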

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-654639 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh "sudo systemctl is-active docker": exit status 1 (241.825343ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh "sudo systemctl is-active containerd": exit status 1 (233.746093ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
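
Remote exit status 3 from `systemctl is-active` means the unit is loaded but not running, which is exactly what is expected here: the cluster uses CRI-O, so docker and containerd must both report inactive. `minikube ssh` surfaces the remote failure (status 3 in stderr, its own exit status 1), so the non-zero exits above are the passing case. A short sketch:

  # only crio should be active on this profile
  minikube -p functional-654639 ssh "sudo systemctl is-active crio"         # active, exit 0
  minikube -p functional-654639 ssh "sudo systemctl is-active docker"       # inactive, remote exit 3
  minikube -p functional-654639 ssh "sudo systemctl is-active containerd"   # inactive, remote exit 3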

                                                
                                    
x
+
TestFunctional/parallel/License (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-654639 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-654639 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2lrwn" [4d815464-d494-47f6-9b7b-29caa0c85f2f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2lrwn" [4d815464-d494-47f6-9b7b-29caa0c85f2f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.007025676s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (10.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdany-port2129235163/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723827801172029024" to /tmp/TestFunctionalparallelMountCmdany-port2129235163/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723827801172029024" to /tmp/TestFunctionalparallelMountCmdany-port2129235163/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723827801172029024" to /tmp/TestFunctionalparallelMountCmdany-port2129235163/001/test-1723827801172029024
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.464164ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 17:03 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 17:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 17:03 test-1723827801172029024
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh cat /mount-9p/test-1723827801172029024
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-654639 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [74cfe606-d5a8-4dd2-b7b1-368151772c53] Pending
helpers_test.go:344: "busybox-mount" [74cfe606-d5a8-4dd2-b7b1-368151772c53] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [74cfe606-d5a8-4dd2-b7b1-368151772c53] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [74cfe606-d5a8-4dd2-b7b1-368151772c53] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00417979s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-654639 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdany-port2129235163/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.82s)
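
The mount tests drive a host directory into the guest over 9p: `minikube mount` stays in the foreground while the share is live, `findmnt -T` inside the VM confirms the 9p filesystem (the first probe races the mount coming up, hence the retried command above), and a pod can then read and write the shared files. A sketch, assuming a hypothetical host directory /tmp/mnt-demo:

  mkdir -p /tmp/mnt-demo && echo hello > /tmp/mnt-demo/created-by-test
  # keep the mount process alive in the background for the demo
  minikube -p functional-654639 mount /tmp/mnt-demo:/mount-9p &
  MOUNT_PID=$!
  sleep 3
  minikube -p functional-654639 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-654639 ssh "ls -la /mount-9p && cat /mount-9p/created-by-test"
  minikube -p functional-654639 ssh "sudo umount -f /mount-9p"
  kill $MOUNT_PID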

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "298.763807ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.338039ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "223.905106ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.802701ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdspecific-port3552691099/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdspecific-port3552691099/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh "sudo umount -f /mount-9p": exit status 1 (249.809168ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-654639 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdspecific-port3552691099/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 service list -o json
functional_test.go:1494: Took "415.916323ms" to run "out/minikube-linux-amd64 -p functional-654639 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.207:30662
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.207:30662
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
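
Taken together, this ServiceCmd group shows the different ways of resolving a NodePort service's endpoint; all of them refer to the hello-node deployment created in DeployApp above. Sketch of the variants exercised:

  minikube -p functional-654639 service list                                    # human-readable table
  minikube -p functional-654639 service list -o json                            # machine readable
  minikube -p functional-654639 service --namespace=default --https --url hello-node   # https://<node-ip>:<nodeport>
  minikube -p functional-654639 service hello-node --url --format "{{.IP}}"     # node IP only
  minikube -p functional-654639 service hello-node --url                        # http://<node-ip>:<nodeport>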

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup925031714/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup925031714/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup925031714/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T" /mount1: exit status 1 (347.135277ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-654639 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup925031714/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup925031714/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-654639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup925031714/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)
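
VerifyCleanup starts three mounts of the same host directory and then tears them all down at once with the --kill flag; the subsequent "unable to find parent, assuming dead" messages confirm the mount daemons are already gone. A sketch, reusing the hypothetical /tmp/mnt-demo directory from the any-port example:

  minikube -p functional-654639 mount /tmp/mnt-demo:/mount1 &
  minikube -p functional-654639 mount /tmp/mnt-demo:/mount2 &
  minikube -p functional-654639 mount /tmp/mnt-demo:/mount3 &
  sleep 3
  minikube -p functional-654639 ssh "findmnt -T /mount1 && findmnt -T /mount2 && findmnt -T /mount3"
  # kill every mount process belonging to this profile in one go
  minikube -p functional-654639 mount --kill=true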

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-654639 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-654639
localhost/kicbase/echo-server:functional-654639
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-654639 image ls --format short --alsologtostderr:
I0816 17:03:57.500373   26927 out.go:345] Setting OutFile to fd 1 ...
I0816 17:03:57.500606   26927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.500663   26927 out.go:358] Setting ErrFile to fd 2...
I0816 17:03:57.500679   26927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.501190   26927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
I0816 17:03:57.501978   26927 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.502138   26927 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.502801   26927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.502858   26927 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.517451   26927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
I0816 17:03:57.517892   26927 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.518741   26927 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.518762   26927 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.519121   26927 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.519263   26927 main.go:141] libmachine: (functional-654639) Calling .GetState
I0816 17:03:57.521191   26927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.521232   26927 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.535188   26927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
I0816 17:03:57.535641   26927 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.536137   26927 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.536160   26927 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.536490   26927 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.536725   26927 main.go:141] libmachine: (functional-654639) Calling .DriverName
I0816 17:03:57.536904   26927 ssh_runner.go:195] Run: systemctl --version
I0816 17:03:57.536928   26927 main.go:141] libmachine: (functional-654639) Calling .GetSSHHostname
I0816 17:03:57.539942   26927 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.540333   26927 main.go:141] libmachine: (functional-654639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:38:b7", ip: ""} in network mk-functional-654639: {Iface:virbr1 ExpiryTime:2024-08-16 18:00:40 +0000 UTC Type:0 Mac:52:54:00:04:38:b7 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:functional-654639 Clientid:01:52:54:00:04:38:b7}
I0816 17:03:57.540360   26927 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined IP address 192.168.39.207 and MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.540530   26927 main.go:141] libmachine: (functional-654639) Calling .GetSSHPort
I0816 17:03:57.540689   26927 main.go:141] libmachine: (functional-654639) Calling .GetSSHKeyPath
I0816 17:03:57.540839   26927 main.go:141] libmachine: (functional-654639) Calling .GetSSHUsername
I0816 17:03:57.540976   26927 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/functional-654639/id_rsa Username:docker}
I0816 17:03:57.627622   26927 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 17:03:57.669250   26927 main.go:141] libmachine: Making call to close driver server
I0816 17:03:57.669266   26927 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:57.669553   26927 main.go:141] libmachine: (functional-654639) DBG | Closing plugin on server side
I0816 17:03:57.669566   26927 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:57.669581   26927 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 17:03:57.669599   26927 main.go:141] libmachine: Making call to close driver server
I0816 17:03:57.669609   26927 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:57.669845   26927 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:57.669861   26927 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-654639 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-654639  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-654639  | 55470e62ab64f | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-654639 image ls --format table --alsologtostderr:
I0816 17:03:57.936642   27026 out.go:345] Setting OutFile to fd 1 ...
I0816 17:03:57.936753   27026 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.936761   27026 out.go:358] Setting ErrFile to fd 2...
I0816 17:03:57.936765   27026 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.936957   27026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
I0816 17:03:57.937606   27026 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.937729   27026 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.938209   27026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.938259   27026 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.952701   27026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
I0816 17:03:57.953126   27026 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.953663   27026 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.953685   27026 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.954010   27026 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.954195   27026 main.go:141] libmachine: (functional-654639) Calling .GetState
I0816 17:03:57.956011   27026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.956057   27026 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.971524   27026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
I0816 17:03:57.971878   27026 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.972382   27026 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.972401   27026 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.972787   27026 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.973074   27026 main.go:141] libmachine: (functional-654639) Calling .DriverName
I0816 17:03:57.973289   27026 ssh_runner.go:195] Run: systemctl --version
I0816 17:03:57.973311   27026 main.go:141] libmachine: (functional-654639) Calling .GetSSHHostname
I0816 17:03:57.976544   27026 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.977045   27026 main.go:141] libmachine: (functional-654639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:38:b7", ip: ""} in network mk-functional-654639: {Iface:virbr1 ExpiryTime:2024-08-16 18:00:40 +0000 UTC Type:0 Mac:52:54:00:04:38:b7 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:functional-654639 Clientid:01:52:54:00:04:38:b7}
I0816 17:03:57.977075   27026 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined IP address 192.168.39.207 and MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.977188   27026 main.go:141] libmachine: (functional-654639) Calling .GetSSHPort
I0816 17:03:57.977393   27026 main.go:141] libmachine: (functional-654639) Calling .GetSSHKeyPath
I0816 17:03:57.977625   27026 main.go:141] libmachine: (functional-654639) Calling .GetSSHUsername
I0816 17:03:57.977800   27026 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/functional-654639/id_rsa Username:docker}
I0816 17:03:58.069867   27026 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 17:03:58.118573   27026 main.go:141] libmachine: Making call to close driver server
I0816 17:03:58.118593   27026 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:58.118904   27026 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:58.118919   27026 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 17:03:58.118931   27026 main.go:141] libmachine: Making call to close driver server
I0816 17:03:58.118938   27026 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:58.119164   27026 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:58.119193   27026 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
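The table above can be reproduced by hand against the same profile. The sketch below assumes the functional-654639 VM from this run is still available; it mirrors the two commands visible in the log (the minikube image command, and the crictl call it runs on the node via ssh_runner).

  # Same listing the test produced, in table format
  out/minikube-linux-amd64 -p functional-654639 image ls --format table
  # Raw CRI-O view on the node; this is the command minikube shells out to
  # (see the "sudo crictl images --output json" ssh_runner line in the stderr above)
  out/minikube-linux-amd64 -p functional-654639 ssh "sudo crictl images --output json"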

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-654639 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df5
9a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"55470e62ab64f8f5bcc659d0b4520214c572063e59283fe849cf50a80aeae5c3","repoDigests":["localhost/minikube-local-cache-test@sha256:887995ac4e51d20240b08e4e7bde1f56291288438f85da1f911c2cc49848e2e3"],"repoTags":["localhost/minikube-local-cache-test:functional-654639"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","re
poDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae565
36f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests"
:["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665
cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-654639"],"size":"4943877"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8
82f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-654639 image ls --format json --alsologtostderr:
I0816 17:03:57.719550   26972 out.go:345] Setting OutFile to fd 1 ...
I0816 17:03:57.719794   26972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.719806   26972 out.go:358] Setting ErrFile to fd 2...
I0816 17:03:57.719812   26972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.720115   26972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
I0816 17:03:57.720770   26972 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.720893   26972 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.721272   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.721328   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.737425   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
I0816 17:03:57.738007   26972 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.738711   26972 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.738725   26972 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.739092   26972 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.739284   26972 main.go:141] libmachine: (functional-654639) Calling .GetState
I0816 17:03:57.741209   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.741243   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.756996   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
I0816 17:03:57.757425   26972 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.758094   26972 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.758140   26972 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.758589   26972 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.758796   26972 main.go:141] libmachine: (functional-654639) Calling .DriverName
I0816 17:03:57.758988   26972 ssh_runner.go:195] Run: systemctl --version
I0816 17:03:57.759010   26972 main.go:141] libmachine: (functional-654639) Calling .GetSSHHostname
I0816 17:03:57.761910   26972 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.762313   26972 main.go:141] libmachine: (functional-654639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:38:b7", ip: ""} in network mk-functional-654639: {Iface:virbr1 ExpiryTime:2024-08-16 18:00:40 +0000 UTC Type:0 Mac:52:54:00:04:38:b7 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:functional-654639 Clientid:01:52:54:00:04:38:b7}
I0816 17:03:57.762343   26972 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined IP address 192.168.39.207 and MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.762487   26972 main.go:141] libmachine: (functional-654639) Calling .GetSSHPort
I0816 17:03:57.762673   26972 main.go:141] libmachine: (functional-654639) Calling .GetSSHKeyPath
I0816 17:03:57.762834   26972 main.go:141] libmachine: (functional-654639) Calling .GetSSHUsername
I0816 17:03:57.763013   26972 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/functional-654639/id_rsa Username:docker}
I0816 17:03:57.854574   26972 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 17:03:57.890220   26972 main.go:141] libmachine: Making call to close driver server
I0816 17:03:57.890236   26972 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:57.890524   26972 main.go:141] libmachine: (functional-654639) DBG | Closing plugin on server side
I0816 17:03:57.890589   26972 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:57.890607   26972 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 17:03:57.890617   26972 main.go:141] libmachine: Making call to close driver server
I0816 17:03:57.890630   26972 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:57.890879   26972 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:57.890898   26972 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
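The JSON format is the easiest one to post-process. As a small sketch (the jq step is an addition for illustration, not something the test itself runs), the tag list can be extracted from the same output:

  # Print every tag known to the CRI-O image store for this profile
  out/minikube-linux-amd64 -p functional-654639 image ls --format json \
    | jq -r '.[] | .repoTags[]?'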

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-654639 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-654639
size: "4943877"
- id: 55470e62ab64f8f5bcc659d0b4520214c572063e59283fe849cf50a80aeae5c3
repoDigests:
- localhost/minikube-local-cache-test@sha256:887995ac4e51d20240b08e4e7bde1f56291288438f85da1f911c2cc49848e2e3
repoTags:
- localhost/minikube-local-cache-test:functional-654639
size: "3330"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-654639 image ls --format yaml --alsologtostderr:
I0816 17:03:57.501914   26926 out.go:345] Setting OutFile to fd 1 ...
I0816 17:03:57.502343   26926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.502370   26926 out.go:358] Setting ErrFile to fd 2...
I0816 17:03:57.502377   26926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.502791   26926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
I0816 17:03:57.503304   26926 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.503409   26926 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.503743   26926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.503778   26926 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.517208   26926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
I0816 17:03:57.517784   26926 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.518421   26926 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.518444   26926 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.518840   26926 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.519012   26926 main.go:141] libmachine: (functional-654639) Calling .GetState
I0816 17:03:57.521006   26926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.521045   26926 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.535221   26926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36227
I0816 17:03:57.535602   26926 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.536093   26926 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.536118   26926 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.536479   26926 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.536705   26926 main.go:141] libmachine: (functional-654639) Calling .DriverName
I0816 17:03:57.536924   26926 ssh_runner.go:195] Run: systemctl --version
I0816 17:03:57.536946   26926 main.go:141] libmachine: (functional-654639) Calling .GetSSHHostname
I0816 17:03:57.540036   26926 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.540385   26926 main.go:141] libmachine: (functional-654639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:38:b7", ip: ""} in network mk-functional-654639: {Iface:virbr1 ExpiryTime:2024-08-16 18:00:40 +0000 UTC Type:0 Mac:52:54:00:04:38:b7 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:functional-654639 Clientid:01:52:54:00:04:38:b7}
I0816 17:03:57.540411   26926 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined IP address 192.168.39.207 and MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.540701   26926 main.go:141] libmachine: (functional-654639) Calling .GetSSHPort
I0816 17:03:57.540899   26926 main.go:141] libmachine: (functional-654639) Calling .GetSSHKeyPath
I0816 17:03:57.541039   26926 main.go:141] libmachine: (functional-654639) Calling .GetSSHUsername
I0816 17:03:57.541199   26926 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/functional-654639/id_rsa Username:docker}
I0816 17:03:57.627935   26926 ssh_runner.go:195] Run: sudo crictl images --output json
I0816 17:03:57.677945   26926 main.go:141] libmachine: Making call to close driver server
I0816 17:03:57.677961   26926 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:57.678227   26926 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:57.678245   26926 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 17:03:57.678246   26926 main.go:141] libmachine: (functional-654639) DBG | Closing plugin on server side
I0816 17:03:57.678253   26926 main.go:141] libmachine: Making call to close driver server
I0816 17:03:57.678262   26926 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:03:57.678520   26926 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:03:57.678531   26926 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-654639 ssh pgrep buildkitd: exit status 1 (210.250647ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image build -t localhost/my-image:functional-654639 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 image build -t localhost/my-image:functional-654639 testdata/build --alsologtostderr: (2.81897298s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-654639 image build -t localhost/my-image:functional-654639 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9210536d671
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-654639
--> 0aedbe2d5c8
Successfully tagged localhost/my-image:functional-654639
0aedbe2d5c8eab3fc5b432a47e2526439532d5a080b51131ec242e3921bbbcb0
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-654639 image build -t localhost/my-image:functional-654639 testdata/build --alsologtostderr:
I0816 17:03:57.939586   27025 out.go:345] Setting OutFile to fd 1 ...
I0816 17:03:57.939708   27025 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.939716   27025 out.go:358] Setting ErrFile to fd 2...
I0816 17:03:57.939721   27025 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:03:57.939874   27025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
I0816 17:03:57.940372   27025 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.940903   27025 config.go:182] Loaded profile config "functional-654639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0816 17:03:57.941239   27025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.941292   27025 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.955207   27025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
I0816 17:03:57.955683   27025 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.956229   27025 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.956251   27025 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.956656   27025 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.956857   27025 main.go:141] libmachine: (functional-654639) Calling .GetState
I0816 17:03:57.959146   27025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0816 17:03:57.959190   27025 main.go:141] libmachine: Launching plugin server for driver kvm2
I0816 17:03:57.972613   27025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
I0816 17:03:57.972987   27025 main.go:141] libmachine: () Calling .GetVersion
I0816 17:03:57.973435   27025 main.go:141] libmachine: Using API Version  1
I0816 17:03:57.973469   27025 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 17:03:57.973773   27025 main.go:141] libmachine: () Calling .GetMachineName
I0816 17:03:57.973964   27025 main.go:141] libmachine: (functional-654639) Calling .DriverName
I0816 17:03:57.974141   27025 ssh_runner.go:195] Run: systemctl --version
I0816 17:03:57.974169   27025 main.go:141] libmachine: (functional-654639) Calling .GetSSHHostname
I0816 17:03:57.977316   27025 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.977728   27025 main.go:141] libmachine: (functional-654639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:38:b7", ip: ""} in network mk-functional-654639: {Iface:virbr1 ExpiryTime:2024-08-16 18:00:40 +0000 UTC Type:0 Mac:52:54:00:04:38:b7 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:functional-654639 Clientid:01:52:54:00:04:38:b7}
I0816 17:03:57.977759   27025 main.go:141] libmachine: (functional-654639) DBG | domain functional-654639 has defined IP address 192.168.39.207 and MAC address 52:54:00:04:38:b7 in network mk-functional-654639
I0816 17:03:57.977984   27025 main.go:141] libmachine: (functional-654639) Calling .GetSSHPort
I0816 17:03:57.978130   27025 main.go:141] libmachine: (functional-654639) Calling .GetSSHKeyPath
I0816 17:03:57.978262   27025 main.go:141] libmachine: (functional-654639) Calling .GetSSHUsername
I0816 17:03:57.978426   27025 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/functional-654639/id_rsa Username:docker}
I0816 17:03:58.067181   27025 build_images.go:161] Building image from path: /tmp/build.2430404410.tar
I0816 17:03:58.067244   27025 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 17:03:58.079518   27025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2430404410.tar
I0816 17:03:58.084106   27025 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2430404410.tar: stat -c "%s %y" /var/lib/minikube/build/build.2430404410.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2430404410.tar': No such file or directory
I0816 17:03:58.084143   27025 ssh_runner.go:362] scp /tmp/build.2430404410.tar --> /var/lib/minikube/build/build.2430404410.tar (3072 bytes)
I0816 17:03:58.112420   27025 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2430404410
I0816 17:03:58.122469   27025 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2430404410 -xf /var/lib/minikube/build/build.2430404410.tar
I0816 17:03:58.132940   27025 crio.go:315] Building image: /var/lib/minikube/build/build.2430404410
I0816 17:03:58.132998   27025 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-654639 /var/lib/minikube/build/build.2430404410 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0816 17:04:00.687428   27025 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-654639 /var/lib/minikube/build/build.2430404410 --cgroup-manager=cgroupfs: (2.554405156s)
I0816 17:04:00.687484   27025 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2430404410
I0816 17:04:00.698338   27025 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2430404410.tar
I0816 17:04:00.707625   27025 build_images.go:217] Built localhost/my-image:functional-654639 from /tmp/build.2430404410.tar
I0816 17:04:00.707675   27025 build_images.go:133] succeeded building to: functional-654639
I0816 17:04:00.707684   27025 build_images.go:134] failed building to: 
I0816 17:04:00.707761   27025 main.go:141] libmachine: Making call to close driver server
I0816 17:04:00.707783   27025 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:04:00.708115   27025 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:04:00.708147   27025 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 17:04:00.708158   27025 main.go:141] libmachine: Making call to close driver server
I0816 17:04:00.708167   27025 main.go:141] libmachine: (functional-654639) Calling .Close
I0816 17:04:00.708168   27025 main.go:141] libmachine: (functional-654639) DBG | Closing plugin on server side
I0816 17:04:00.708374   27025 main.go:141] libmachine: Successfully made call to close driver server
I0816 17:04:00.708389   27025 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)
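The STEP 1/3 .. 3/3 lines above come from podman building the context that minikube copied onto the node as a tar file. The testdata/build directory itself is not reproduced in this report, so the following is only a sketch of an equivalent context reconstructed from those steps; the content.txt payload and the /tmp path are placeholders.

  # Hypothetical build context matching the FROM / RUN / ADD steps shown above
  mkdir -p /tmp/minikube-build-demo && cd /tmp/minikube-build-demo
  echo "placeholder" > content.txt    # assumption: the real content.txt differs
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  # minikube tars the context, copies it to the node, and runs podman build there
  # (see the build_images.go / crio.go lines in the stderr above)
  out/minikube-linux-amd64 -p functional-654639 image build -t localhost/my-image:functional-654639 .
  out/minikube-linux-amd64 -p functional-654639 image ls | grep my-image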

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/08/16 17:03:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.889577762s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-654639
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image load --daemon kicbase/echo-server:functional-654639 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 image load --daemon kicbase/echo-server:functional-654639 --alsologtostderr: (1.337694882s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image load --daemon kicbase/echo-server:functional-654639 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-654639
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image load --daemon kicbase/echo-server:functional-654639 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)
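Taken together, Setup, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon exercise the docker-daemon path of minikube image load. A condensed sketch of that flow, reusing the image and profile names from the log lines above:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-654639
  # Push the image from the local docker daemon into the cluster's CRI-O store
  out/minikube-linux-amd64 -p functional-654639 image load --daemon kicbase/echo-server:functional-654639
  out/minikube-linux-amd64 -p functional-654639 image ls | grep echo-server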

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image save kicbase/echo-server:functional-654639 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image rm kicbase/echo-server:functional-654639 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 image rm kicbase/echo-server:functional-654639 --alsologtostderr: (3.264211114s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (3.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-654639 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.905688824s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-654639
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-654639 image save --daemon kicbase/echo-server:functional-654639 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-654639
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
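ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a save/remove/restore round trip. A condensed sketch follows; the tar path below is a placeholder, whereas the test uses the Jenkins workspace path shown in the log.

  out/minikube-linux-amd64 -p functional-654639 image save kicbase/echo-server:functional-654639 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-654639 image rm kicbase/echo-server:functional-654639
  out/minikube-linux-amd64 -p functional-654639 image load /tmp/echo-server-save.tar
  # Export back into the local docker daemon and confirm it arrived
  out/minikube-linux-amd64 -p functional-654639 image save --daemon kicbase/echo-server:functional-654639
  docker image inspect localhost/kicbase/echo-server:functional-654639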

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-654639
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-654639
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-654639
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (197.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-764617 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 17:06:12.269233   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:06:39.973350   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-764617 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.152994158s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.79s)
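The whole multi-control-plane setup is the single start invocation shown above; condensed, with the flags copied from the ha_test.go line (--ha provisions the extra control-plane nodes that the status call then reports):

  out/minikube-linux-amd64 start -p ha-764617 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr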

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-764617 -- rollout status deployment/busybox: (4.185034596s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-5kg62 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rcq66 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rvd47 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-5kg62 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rcq66 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rvd47 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-5kg62 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rcq66 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rvd47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.26s)
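The checks above verify in-cluster DNS from every busybox replica spread across the HA nodes. A minimal hand-run sketch of the same check, reusing this run's profile (the pod name below is taken from this log and will differ on a fresh deployment):

# list the busybox pods, then resolve the cluster-internal service name from one of them
kubectl --context ha-764617 get pods -o name
kubectl --context ha-764617 exec busybox-7dff88458-5kg62 -- nslookup kubernetes.default.svc.cluster.local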

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.15s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-5kg62 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-5kg62 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rcq66 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rcq66 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rvd47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-764617 -- exec busybox-7dff88458-rvd47 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)
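The pipeline above pulls the resolved address of host.minikube.internal out of busybox's nslookup output (line 5, field 3) and then pings it from the pod. A sketch of the same check run by hand, assuming busybox's nslookup output keeps that layout (pod name again taken from this run):

# resolve the host gateway from inside a pod, then ping it once
HOST_IP=$(kubectl --context ha-764617 exec busybox-7dff88458-5kg62 -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl --context ha-764617 exec busybox-7dff88458-5kg62 -- ping -c 1 "$HOST_IP"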

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-764617 -v=7 --alsologtostderr
E0816 17:08:21.061930   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:21.068332   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:21.079759   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:21.101364   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:21.142924   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:21.224408   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:21.385825   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:21.707471   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:22.348802   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:23.630890   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:26.192761   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:08:31.314720   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-764617 -v=7 --alsologtostderr: (56.078212256s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.89s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-764617 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.31s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp testdata/cp-test.txt ha-764617:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617:/home/docker/cp-test.txt ha-764617-m02:/home/docker/cp-test_ha-764617_ha-764617-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test_ha-764617_ha-764617-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617:/home/docker/cp-test.txt ha-764617-m03:/home/docker/cp-test_ha-764617_ha-764617-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test_ha-764617_ha-764617-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617:/home/docker/cp-test.txt ha-764617-m04:/home/docker/cp-test_ha-764617_ha-764617-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test_ha-764617_ha-764617-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp testdata/cp-test.txt ha-764617-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m02:/home/docker/cp-test.txt ha-764617:/home/docker/cp-test_ha-764617-m02_ha-764617.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test_ha-764617-m02_ha-764617.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m02:/home/docker/cp-test.txt ha-764617-m03:/home/docker/cp-test_ha-764617-m02_ha-764617-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test_ha-764617-m02_ha-764617-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m02:/home/docker/cp-test.txt ha-764617-m04:/home/docker/cp-test_ha-764617-m02_ha-764617-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test_ha-764617-m02_ha-764617-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp testdata/cp-test.txt ha-764617-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt ha-764617:/home/docker/cp-test_ha-764617-m03_ha-764617.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test.txt"
E0816 17:08:41.557045   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test_ha-764617-m03_ha-764617.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt ha-764617-m02:/home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test_ha-764617-m03_ha-764617-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m03:/home/docker/cp-test.txt ha-764617-m04:/home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test_ha-764617-m03_ha-764617-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp testdata/cp-test.txt ha-764617-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1933781201/001/cp-test_ha-764617-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt ha-764617:/home/docker/cp-test_ha-764617-m04_ha-764617.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617 "sudo cat /home/docker/cp-test_ha-764617-m04_ha-764617.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt ha-764617-m02:/home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test_ha-764617-m04_ha-764617-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 cp ha-764617-m04:/home/docker/cp-test.txt ha-764617-m03:/home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m03 "sudo cat /home/docker/cp-test_ha-764617-m04_ha-764617-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.31s)
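Each cp above is immediately verified by ssh'ing into the target node and reading the file back. A reduced sketch of one such round trip, using the same profile and node names as this run:

# copy a local file onto a secondary node, then confirm its contents over ssh
out/minikube-linux-amd64 -p ha-764617 cp testdata/cp-test.txt ha-764617-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-764617 ssh -n ha-764617-m02 "sudo cat /home/docker/cp-test.txt"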

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.465191312s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-764617 node delete m03 -v=7 --alsologtostderr: (15.809195399s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.52s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (339.2s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-764617 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 17:23:21.062266   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:24:44.128045   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:26:12.269291   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-764617 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m38.487246569s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (339.20s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.49s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-764617 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-764617 --control-plane -v=7 --alsologtostderr: (1m13.70015104s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-764617 status -v=7 --alsologtostderr
E0816 17:28:21.061535   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                    
TestJSONOutput/start/Command (75.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-103015 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-103015 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.947865748s)
--- PASS: TestJSONOutput/start/Command (75.95s)
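With --output=json, minikube prints one CloudEvents-style JSON object per line (the exact shape is visible in the TestErrorJSONOutput stdout further down). A sketch of reading just the step messages from that stream; jq is not part of the test and is only assumed here as a convenient reader:

# re-run start against the same profile and print each step's message
out/minikube-linux-amd64 start -p json-output-103015 --output=json --driver=kvm2 --container-runtime=crio \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'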

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-103015 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-103015 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.52s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-103015 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-103015 --output=json --user=testUser: (6.521278873s)
--- PASS: TestJSONOutput/stop/Command (6.52s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-760600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-760600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.039165ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4daf14f3-fb65-4f44-b446-d3c868cf6b6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-760600] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67f2c5a7-565e-4bef-83a7-add6d7061bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"c065af34-bc58-4281-8b02-754d8af7aaab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9c3329ae-62b5-4a8a-8aae-f82699168d32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig"}}
	{"specversion":"1.0","id":"77fae49d-2b06-4331-8b14-f234ea942b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube"}}
	{"specversion":"1.0","id":"bba22e2d-634e-4628-bbdf-043059ac1d7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1c75b736-d3ff-4e50-8909-eb758f2348be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d4d5f3d7-57ed-4464-9c8c-8be4e9676cda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-760600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-760600
--- PASS: TestErrorJSONOutput (0.18s)
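The unsupported driver surfaces in two places: the process exits with status 56, and the last event on stdout is an io.k8s.sigs.minikube.error with name DRV_UNSUPPORTED_OS. A sketch of checking both by hand; the profile name here is arbitrary and, as in the cleanup step above, should be deleted afterwards:

# provoke the same failure, then inspect the exit code and the error event
out/minikube-linux-amd64 start -p json-output-error-check --memory=2200 --output=json --wait=true --driver=fail > events.json
echo "exit status: $?"                              # 56 in the run above
grep -c '"name":"DRV_UNSUPPORTED_OS"' events.json   # expect 1 matching error event
out/minikube-linux-amd64 delete -p json-output-error-check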

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (86.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-947623 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-947623 --driver=kvm2  --container-runtime=crio: (42.077735272s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-950585 --driver=kvm2  --container-runtime=crio
E0816 17:31:12.269913   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-950585 --driver=kvm2  --container-runtime=crio: (41.801863623s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-947623
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-950585
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-950585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-950585
helpers_test.go:175: Cleaning up "first-947623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-947623
--- PASS: TestMinikubeProfile (86.45s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-106772 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-106772 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.081699376s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.08s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-106772 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-106772 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
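Verification here is simply that the host directory is visible at /minikube-host inside the guest and that it appears as a 9p mount. The same two checks run by hand against this run's first mount profile (quoting the pipe keeps the grep inside the guest):

# confirm the shared directory exists and is mounted via 9p
out/minikube-linux-amd64 -p mount-start-1-106772 ssh -- ls /minikube-host
out/minikube-linux-amd64 -p mount-start-1-106772 ssh -- "mount | grep 9p"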

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.84s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120063 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120063 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.83635021s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.84s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120063 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120063 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-106772 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120063 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120063 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-120063
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-120063: (1.267381202s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-120063
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-120063: (21.849755005s)
--- PASS: TestMountStart/serial/RestartStopped (22.85s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120063 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-120063 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.71s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-797386 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 17:33:21.061262   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:34:15.336735   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-797386 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.319033974s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.71s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-797386 -- rollout status deployment/busybox: (3.719167498s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-6986q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-r9pdc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-6986q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-r9pdc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-6986q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-r9pdc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.12s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-6986q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-6986q -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-r9pdc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-797386 -- exec busybox-7dff88458-r9pdc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
TestMultiNode/serial/AddNode (51.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-797386 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-797386 -v 3 --alsologtostderr: (50.816296509s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.36s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-797386 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp testdata/cp-test.txt multinode-797386:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3908969690/001/cp-test_multinode-797386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386:/home/docker/cp-test.txt multinode-797386-m02:/home/docker/cp-test_multinode-797386_multinode-797386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m02 "sudo cat /home/docker/cp-test_multinode-797386_multinode-797386-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386:/home/docker/cp-test.txt multinode-797386-m03:/home/docker/cp-test_multinode-797386_multinode-797386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m03 "sudo cat /home/docker/cp-test_multinode-797386_multinode-797386-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp testdata/cp-test.txt multinode-797386-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3908969690/001/cp-test_multinode-797386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt multinode-797386:/home/docker/cp-test_multinode-797386-m02_multinode-797386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386 "sudo cat /home/docker/cp-test_multinode-797386-m02_multinode-797386.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386-m02:/home/docker/cp-test.txt multinode-797386-m03:/home/docker/cp-test_multinode-797386-m02_multinode-797386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m03 "sudo cat /home/docker/cp-test_multinode-797386-m02_multinode-797386-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp testdata/cp-test.txt multinode-797386-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3908969690/001/cp-test_multinode-797386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt multinode-797386:/home/docker/cp-test_multinode-797386-m03_multinode-797386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386 "sudo cat /home/docker/cp-test_multinode-797386-m03_multinode-797386.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 cp multinode-797386-m03:/home/docker/cp-test.txt multinode-797386-m02:/home/docker/cp-test_multinode-797386-m03_multinode-797386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 ssh -n multinode-797386-m02 "sudo cat /home/docker/cp-test_multinode-797386-m03_multinode-797386-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.90s)

                                                
                                    
TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-797386 node stop m03: (1.346874253s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-797386 status: exit status 7 (406.593347ms)

                                                
                                                
-- stdout --
	multinode-797386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-797386-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-797386-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-797386 status --alsologtostderr: exit status 7 (413.839071ms)

                                                
                                                
-- stdout --
	multinode-797386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-797386-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-797386-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:35:35.291878   44888 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:35:35.292095   44888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:35:35.292102   44888 out.go:358] Setting ErrFile to fd 2...
	I0816 17:35:35.292106   44888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:35:35.292333   44888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:35:35.292560   44888 out.go:352] Setting JSON to false
	I0816 17:35:35.292594   44888 mustload.go:65] Loading cluster: multinode-797386
	I0816 17:35:35.292657   44888 notify.go:220] Checking for updates...
	I0816 17:35:35.293072   44888 config.go:182] Loaded profile config "multinode-797386": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 17:35:35.293089   44888 status.go:255] checking status of multinode-797386 ...
	I0816 17:35:35.293458   44888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:35:35.293513   44888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:35:35.312202   44888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0816 17:35:35.312651   44888 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:35:35.313109   44888 main.go:141] libmachine: Using API Version  1
	I0816 17:35:35.313130   44888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:35:35.313466   44888 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:35:35.313661   44888 main.go:141] libmachine: (multinode-797386) Calling .GetState
	I0816 17:35:35.315056   44888 status.go:330] multinode-797386 host status = "Running" (err=<nil>)
	I0816 17:35:35.315072   44888 host.go:66] Checking if "multinode-797386" exists ...
	I0816 17:35:35.315341   44888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:35:35.315381   44888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:35:35.329961   44888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39321
	I0816 17:35:35.330304   44888 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:35:35.330676   44888 main.go:141] libmachine: Using API Version  1
	I0816 17:35:35.330693   44888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:35:35.331022   44888 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:35:35.331184   44888 main.go:141] libmachine: (multinode-797386) Calling .GetIP
	I0816 17:35:35.333710   44888 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:35:35.334102   44888 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:35:35.334130   44888 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:35:35.334221   44888 host.go:66] Checking if "multinode-797386" exists ...
	I0816 17:35:35.334498   44888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:35:35.334531   44888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:35:35.348471   44888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I0816 17:35:35.348897   44888 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:35:35.349292   44888 main.go:141] libmachine: Using API Version  1
	I0816 17:35:35.349310   44888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:35:35.349585   44888 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:35:35.349746   44888 main.go:141] libmachine: (multinode-797386) Calling .DriverName
	I0816 17:35:35.349907   44888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:35:35.349931   44888 main.go:141] libmachine: (multinode-797386) Calling .GetSSHHostname
	I0816 17:35:35.352767   44888 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:35:35.353240   44888 main.go:141] libmachine: (multinode-797386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:3e:a1", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:32:52 +0000 UTC Type:0 Mac:52:54:00:2f:3e:a1 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-797386 Clientid:01:52:54:00:2f:3e:a1}
	I0816 17:35:35.353272   44888 main.go:141] libmachine: (multinode-797386) DBG | domain multinode-797386 has defined IP address 192.168.39.218 and MAC address 52:54:00:2f:3e:a1 in network mk-multinode-797386
	I0816 17:35:35.353459   44888 main.go:141] libmachine: (multinode-797386) Calling .GetSSHPort
	I0816 17:35:35.353637   44888 main.go:141] libmachine: (multinode-797386) Calling .GetSSHKeyPath
	I0816 17:35:35.353784   44888 main.go:141] libmachine: (multinode-797386) Calling .GetSSHUsername
	I0816 17:35:35.353993   44888 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386/id_rsa Username:docker}
	I0816 17:35:35.440866   44888 ssh_runner.go:195] Run: systemctl --version
	I0816 17:35:35.447421   44888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:35:35.465541   44888 kubeconfig.go:125] found "multinode-797386" server: "https://192.168.39.218:8443"
	I0816 17:35:35.465569   44888 api_server.go:166] Checking apiserver status ...
	I0816 17:35:35.465598   44888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:35:35.481251   44888 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1101/cgroup
	W0816 17:35:35.491485   44888 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1101/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 17:35:35.491535   44888 ssh_runner.go:195] Run: ls
	I0816 17:35:35.495547   44888 api_server.go:253] Checking apiserver healthz at https://192.168.39.218:8443/healthz ...
	I0816 17:35:35.499562   44888 api_server.go:279] https://192.168.39.218:8443/healthz returned 200:
	ok
	I0816 17:35:35.499589   44888 status.go:422] multinode-797386 apiserver status = Running (err=<nil>)
	I0816 17:35:35.499602   44888 status.go:257] multinode-797386 status: &{Name:multinode-797386 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:35:35.499637   44888 status.go:255] checking status of multinode-797386-m02 ...
	I0816 17:35:35.500032   44888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:35:35.500072   44888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:35:35.515047   44888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0816 17:35:35.515586   44888 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:35:35.516023   44888 main.go:141] libmachine: Using API Version  1
	I0816 17:35:35.516043   44888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:35:35.516324   44888 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:35:35.516488   44888 main.go:141] libmachine: (multinode-797386-m02) Calling .GetState
	I0816 17:35:35.518117   44888 status.go:330] multinode-797386-m02 host status = "Running" (err=<nil>)
	I0816 17:35:35.518131   44888 host.go:66] Checking if "multinode-797386-m02" exists ...
	I0816 17:35:35.518403   44888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:35:35.518434   44888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:35:35.533573   44888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I0816 17:35:35.534007   44888 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:35:35.534386   44888 main.go:141] libmachine: Using API Version  1
	I0816 17:35:35.534404   44888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:35:35.534698   44888 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:35:35.534913   44888 main.go:141] libmachine: (multinode-797386-m02) Calling .GetIP
	I0816 17:35:35.537687   44888 main.go:141] libmachine: (multinode-797386-m02) DBG | domain multinode-797386-m02 has defined MAC address 52:54:00:d3:d0:1c in network mk-multinode-797386
	I0816 17:35:35.538074   44888 main.go:141] libmachine: (multinode-797386-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d0:1c", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:33:56 +0000 UTC Type:0 Mac:52:54:00:d3:d0:1c Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:multinode-797386-m02 Clientid:01:52:54:00:d3:d0:1c}
	I0816 17:35:35.538109   44888 main.go:141] libmachine: (multinode-797386-m02) DBG | domain multinode-797386-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:d3:d0:1c in network mk-multinode-797386
	I0816 17:35:35.538210   44888 host.go:66] Checking if "multinode-797386-m02" exists ...
	I0816 17:35:35.538556   44888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:35:35.538602   44888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:35:35.553139   44888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0816 17:35:35.553485   44888 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:35:35.553900   44888 main.go:141] libmachine: Using API Version  1
	I0816 17:35:35.553919   44888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:35:35.554213   44888 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:35:35.554386   44888 main.go:141] libmachine: (multinode-797386-m02) Calling .DriverName
	I0816 17:35:35.554572   44888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:35:35.554601   44888 main.go:141] libmachine: (multinode-797386-m02) Calling .GetSSHHostname
	I0816 17:35:35.557391   44888 main.go:141] libmachine: (multinode-797386-m02) DBG | domain multinode-797386-m02 has defined MAC address 52:54:00:d3:d0:1c in network mk-multinode-797386
	I0816 17:35:35.557819   44888 main.go:141] libmachine: (multinode-797386-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d0:1c", ip: ""} in network mk-multinode-797386: {Iface:virbr1 ExpiryTime:2024-08-16 18:33:56 +0000 UTC Type:0 Mac:52:54:00:d3:d0:1c Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:multinode-797386-m02 Clientid:01:52:54:00:d3:d0:1c}
	I0816 17:35:35.557843   44888 main.go:141] libmachine: (multinode-797386-m02) DBG | domain multinode-797386-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:d3:d0:1c in network mk-multinode-797386
	I0816 17:35:35.557966   44888 main.go:141] libmachine: (multinode-797386-m02) Calling .GetSSHPort
	I0816 17:35:35.558126   44888 main.go:141] libmachine: (multinode-797386-m02) Calling .GetSSHKeyPath
	I0816 17:35:35.558247   44888 main.go:141] libmachine: (multinode-797386-m02) Calling .GetSSHUsername
	I0816 17:35:35.558394   44888 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19461-9545/.minikube/machines/multinode-797386-m02/id_rsa Username:docker}
	I0816 17:35:35.635210   44888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:35:35.647925   44888 status.go:257] multinode-797386-m02 status: &{Name:multinode-797386-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:35:35.647967   44888 status.go:255] checking status of multinode-797386-m03 ...
	I0816 17:35:35.648384   44888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 17:35:35.648430   44888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 17:35:35.663463   44888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0816 17:35:35.663877   44888 main.go:141] libmachine: () Calling .GetVersion
	I0816 17:35:35.664342   44888 main.go:141] libmachine: Using API Version  1
	I0816 17:35:35.664364   44888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 17:35:35.664713   44888 main.go:141] libmachine: () Calling .GetMachineName
	I0816 17:35:35.664912   44888 main.go:141] libmachine: (multinode-797386-m03) Calling .GetState
	I0816 17:35:35.666340   44888 status.go:330] multinode-797386-m03 host status = "Stopped" (err=<nil>)
	I0816 17:35:35.666354   44888 status.go:343] host is not running, skipping remaining checks
	I0816 17:35:35.666359   44888 status.go:257] multinode-797386-m03 status: &{Name:multinode-797386-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 node start m03 -v=7 --alsologtostderr
E0816 17:36:12.269810   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-797386 node start m03 -v=7 --alsologtostderr: (38.72326351s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.32s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-797386 node delete m03: (1.652238244s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (194.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-797386 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 17:46:12.269977   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-797386 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.035416565s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-797386 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (194.56s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-797386
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-797386-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-797386-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.077051ms)

                                                
                                                
-- stdout --
	* [multinode-797386-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-797386-m02' is duplicated with machine name 'multinode-797386-m02' in profile 'multinode-797386'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-797386-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-797386-m03 --driver=kvm2  --container-runtime=crio: (39.990107173s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-797386
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-797386: exit status 80 (206.10609ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-797386 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-797386-m03 already exists in multinode-797386-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-797386-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-797386-m03: (1.000678919s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.30s)

                                                
                                    
TestScheduledStopUnix (109.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-440719 --memory=2048 --driver=kvm2  --container-runtime=crio
E0816 17:53:21.062082   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-440719 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.126866291s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-440719 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-440719 -n scheduled-stop-440719
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-440719 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-440719 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-440719 -n scheduled-stop-440719
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-440719
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-440719 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-440719
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-440719: exit status 7 (64.794711ms)

                                                
                                                
-- stdout --
	scheduled-stop-440719
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-440719 -n scheduled-stop-440719
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-440719 -n scheduled-stop-440719: exit status 7 (64.031943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-440719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-440719
--- PASS: TestScheduledStopUnix (109.64s)

                                                
                                    
TestRunningBinaryUpgrade (192.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1687748902 start -p running-upgrade-339463 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1687748902 start -p running-upgrade-339463 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m55.297696424s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-339463 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-339463 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.260395929s)
helpers_test.go:175: Cleaning up "running-upgrade-339463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-339463
--- PASS: TestRunningBinaryUpgrade (192.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-999954 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-999954 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (70.587802ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-999954] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (85.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-999954 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-999954 --driver=kvm2  --container-runtime=crio: (1m25.748721303s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-999954 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.99s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-999954 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0816 17:56:12.269321   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-999954 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.568406139s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-999954 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-999954 status -o json: exit status 2 (255.234239ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-999954","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-999954
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-999954: (1.170691961s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.99s)

                                                
                                    
TestNoKubernetes/serial/Start (27.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-999954 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-999954 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.351821205s)
--- PASS: TestNoKubernetes/serial/Start (27.35s)

                                                
                                    
TestNetworkPlugins/group/false (2.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-791304 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-791304 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (99.073454ms)

                                                
                                                
-- stdout --
	* [false-791304] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 17:56:53.922659   54549 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:56:53.922887   54549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:56:53.922897   54549 out.go:358] Setting ErrFile to fd 2...
	I0816 17:56:53.922903   54549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:56:53.923085   54549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-9545/.minikube/bin
	I0816 17:56:53.923706   54549 out.go:352] Setting JSON to false
	I0816 17:56:53.924703   54549 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5912,"bootTime":1723825102,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 17:56:53.924764   54549 start.go:139] virtualization: kvm guest
	I0816 17:56:53.926905   54549 out.go:177] * [false-791304] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 17:56:53.928098   54549 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:56:53.928103   54549 notify.go:220] Checking for updates...
	I0816 17:56:53.929131   54549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:56:53.930237   54549 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-9545/kubeconfig
	I0816 17:56:53.931298   54549 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-9545/.minikube
	I0816 17:56:53.932282   54549 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 17:56:53.933446   54549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:56:53.934892   54549 config.go:182] Loaded profile config "NoKubernetes-999954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0816 17:56:53.935011   54549 config.go:182] Loaded profile config "kubernetes-upgrade-108715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 17:56:53.935109   54549 config.go:182] Loaded profile config "running-upgrade-339463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0816 17:56:53.935199   54549 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:56:53.972022   54549 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 17:56:53.973146   54549 start.go:297] selected driver: kvm2
	I0816 17:56:53.973178   54549 start.go:901] validating driver "kvm2" against <nil>
	I0816 17:56:53.973195   54549 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:56:53.975168   54549 out.go:201] 
	W0816 17:56:53.976378   54549 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0816 17:56:53.977572   54549 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-791304 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-791304" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 17:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.247:8443
  name: running-upgrade-339463
contexts:
- context:
    cluster: running-upgrade-339463
    user: running-upgrade-339463
  name: running-upgrade-339463
current-context: running-upgrade-339463
kind: Config
preferences: {}
users:
- name: running-upgrade-339463
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/running-upgrade-339463/client.crt
    client-key: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/running-upgrade-339463/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-791304

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-791304"

                                                
                                                
----------------------- debugLogs end: false-791304 [took: 2.603961333s] --------------------------------
helpers_test.go:175: Cleaning up "false-791304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-791304
--- PASS: TestNetworkPlugins/group/false (2.83s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-999954 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-999954 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.561358ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (28.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.760617113s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.418705645s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.18s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-999954
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-999954: (2.579785239s)
--- PASS: TestNoKubernetes/serial/Stop (2.58s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-999954 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-999954 --driver=kvm2  --container-runtime=crio: (23.838963824s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-999954 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-999954 "sudo systemctl is-active --quiet service kubelet": exit status 1 (182.288215ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestPause/serial/Start (72.17s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-102973 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0816 17:58:21.061439   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/functional-654639/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-102973 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m12.173215063s)
--- PASS: TestPause/serial/Start (72.17s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (52.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-102973 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-102973 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.028574448s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.05s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (125.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1560246327 start -p stopped-upgrade-219068 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1560246327 start -p stopped-upgrade-219068 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m8.760321622s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1560246327 -p stopped-upgrade-219068 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1560246327 -p stopped-upgrade-219068 stop: (1.383834674s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-219068 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-219068 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.522709519s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.67s)
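
A sketch of the upgrade flow exercised above: start a cluster with a legacy minikube binary, stop it, then start the same profile with the current binary. The binary paths and profile name are copied from the log and are environment-specific assumptions, not fixed values:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", bin, args, err)
		os.Exit(1)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0.1560246327"
	cur := "out/minikube-linux-amd64"
	profile := "stopped-upgrade-219068"

	// The legacy binary uses --vm-driver, the current one uses --driver,
	// mirroring the commands in the log above.
	run(old, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	run(old, "-p", profile, "stop")
	run(cur, "start", "-p", profile, "--memory=2200", "--driver=kvm2", "--container-runtime=crio")
}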

                                                
                                    
x
+
TestPause/serial/Pause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-102973 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-102973 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-102973 --output=json --layout=cluster: exit status 2 (264.470923ms)

                                                
                                                
-- stdout --
	{"Name":"pause-102973","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-102973","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
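
The stdout block above shows the cluster-layout status document returned by `minikube status --output=json --layout=cluster`. A minimal sketch of decoding it; the struct fields mirror the JSON keys in the log and are illustrative, not minikube's own API types:

package main

import (
	"encoding/json"
	"fmt"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// A trimmed copy of the JSON shown in the log above.
	raw := `{"Name":"pause-102973","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-102973","StatusName":"OK"}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// StatusName "Paused" (code 418) is what produces the non-zero exit
	// status 2 seen in the log while the status command itself succeeds.
	fmt.Printf("%s: %s (code %d), %d node(s)\n", st.Name, st.StatusName, st.StatusCode, len(st.Nodes))
}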

                                                
                                    
x
+
TestPause/serial/Unpause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-102973 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-102973 --alsologtostderr -v=5: (1.031016422s)
--- PASS: TestPause/serial/Unpause (1.03s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.33s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-102973 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-102973 --alsologtostderr -v=5: (1.334072365s)
--- PASS: TestPause/serial/PauseAgain (1.33s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.39s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-102973 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-102973 --alsologtostderr -v=5: (1.393817852s)
--- PASS: TestPause/serial/DeletePaused (1.39s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.97s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.969507725s)
--- PASS: TestPause/serial/VerifyDeletedResources (15.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (51.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (51.934763231s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-791304 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-791304 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-791304 replace --force -f testdata/netcat-deployment.yaml: (1.136047315s)
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kpt7q" [505bb06f-148d-41a4-b725-62e7a58c45a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kpt7q" [505bb06f-148d-41a4-b725-62e7a58c45a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003380524s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.19s)
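
The "waiting 15m0s for pods matching app=netcat" step above polls until the pod reports Running. An equivalent sketch using `kubectl wait` instead of the test's own polling helper; the context, label, and namespace come from the log, and the 15m timeout mirrors the test's wait window:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the netcat pod reports Ready, or fail after the timeout.
	out, err := exec.Command("kubectl", "--context", "auto-791304",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=netcat", "-n", "default", "--timeout=15m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("netcat pod did not become ready:", err)
	}
}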

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (16.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-791304 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-791304 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14751703s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-791304 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (16.43s)
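
The log above shows the pattern behind this pass: the first in-cluster nslookup times out ("no servers could be reached") and a re-run succeeds once cluster DNS has settled. A sketch of that retry loop; the context name is from the log, and the retry count and interval are illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "auto-791304", "exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default"}
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("resolved on attempt %d:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		// Give CoreDNS and the pod network a moment before retrying.
		time.Sleep(5 * time.Second)
	}
	fmt.Println("nslookup kubernetes.default did not succeed")
}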

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (74.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.163006385s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.16s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-219068
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (109.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m49.154832594s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (109.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (95.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m35.491103813s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (120.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m0.089496655s)
--- PASS: TestNetworkPlugins/group/calico/Start (120.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wzb6t" [a7211d44-d2ab-48cd-89c1-4256f1d63dc3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004439088s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-791304 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-791304 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5jcq8" [d56f8806-7d02-43ab-b65f-d646eb6033ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5jcq8" [d56f8806-7d02-43ab-b65f-d646eb6033ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005262511s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-791304 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (67.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.710467801s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-791304 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-791304 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k5jxb" [ba37e5a4-d199-4545-a368-5a65d9d8a491] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k5jxb" [ba37e5a4-d199-4545-a368-5a65d9d8a491] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004257309s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-791304 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-791304 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-791304 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qhqp8" [15008f7f-e6de-40c6-9544-5f935b9e36bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qhqp8" [15008f7f-e6de-40c6-9544-5f935b9e36bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005265028s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-791304 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (75.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-791304 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m15.012082246s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wrxdm" [558e5477-a3da-4f04-a885-058830901069] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005467109s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-791304 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-791304 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fn2d5" [b60f6689-30fe-4d67-a4f3-aa6a46cd2e91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fn2d5" [b60f6689-30fe-4d67-a4f3-aa6a46cd2e91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005091236s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-791304 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7tm64" [74395a20-ea22-483a-9eac-483416e7c1d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005323581s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-791304 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-791304 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s6kwr" [9a738ae1-f9f5-4d02-b54c-9383ddd454be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s6kwr" [9a738ae1-f9f5-4d02-b54c-9383ddd454be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004564589s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (90.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-777541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-777541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m30.501449282s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-791304 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (103.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-864476 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-864476 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m43.34908626s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-791304 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-791304 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lcmf5" [cb5a5247-c303-438b-8944-799300b8c89b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lcmf5" [cb5a5247-c303-438b-8944-799300b8c89b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004053276s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-791304 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-791304 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-256678 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:06:12.269141   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/addons-671083/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-256678 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m25.514002207s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-777541 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eb629961-107a-4695-8482-6072d7bab160] Pending
helpers_test.go:344: "busybox" [eb629961-107a-4695-8482-6072d7bab160] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eb629961-107a-4695-8482-6072d7bab160] Running
E0816 18:06:23.209772   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:23.216181   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:23.227546   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:23.248920   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:23.290349   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:23.372411   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:23.533940   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:23.855756   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:24.497606   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:06:25.779038   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/auto-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004012256s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-777541 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-777541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-777541 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-864476 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [934a25fd-9643-4705-9e45-95dc1995bfb2] Pending
helpers_test.go:344: "busybox" [934a25fd-9643-4705-9e45-95dc1995bfb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [934a25fd-9643-4705-9e45-95dc1995bfb2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005696188s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-864476 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-864476 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-864476 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [55d2e34f-54e6-4f0f-93d0-4de08331fb36] Pending
helpers_test.go:344: "busybox" [55d2e34f-54e6-4f0f-93d0-4de08331fb36] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [55d2e34f-54e6-4f0f-93d0-4de08331fb36] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003712564s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-256678 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-256678 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (638.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-777541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-777541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m38.56197151s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-777541 -n embed-certs-777541
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (638.80s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (592.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-864476 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:09:34.153089   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:35.435392   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-864476 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m51.869511393s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-864476 -n no-preload-864476
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (592.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (586.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-256678 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:09:51.838314   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/enable-default-cni-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:52.208106   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:09:53.361148   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:05.617987   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/bridge-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:13.843364   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:14.631557   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:14.637869   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:14.649245   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:14.670610   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:14.712063   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:14.793503   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:14.955013   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:15.276742   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:15.918843   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:17.200500   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:19.762224   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:24.884531   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:32.634337   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:33.169597   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/calico-791304/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:10:35.126445   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-256678 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m45.90138848s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-256678 -n default-k8s-diff-port-256678
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (586.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-783465 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-783465 --alsologtostderr -v=3: (2.281098488s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465: exit status 7 (67.038643ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-783465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
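Note: the steps above amount to "confirm the node is stopped, then enable an addon against the stopped profile". A minimal shell sketch of that flow, reusing the profile name and flags verbatim from the log; exit status 7 from the status command corresponds to the "Stopped" state shown in stdout and is treated as acceptable by the test:

    # Host state of a stopped profile; exit status 7 maps to "Stopped" and is tolerated here
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-783465 -n old-k8s-version-783465 || true
    # Addons can still be enabled while the profile is stopped
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-783465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4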

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-774287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:34:32.866481   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/kindnet-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-774287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (46.8712077s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.87s)
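For readability, the same start invocation used by this test, broken across lines (flags verbatim from the log entry above):

    out/minikube-linux-amd64 start -p newest-cni-774287 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.31.0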

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-774287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-774287 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-774287 --alsologtostderr -v=3: (7.295104263s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-774287 -n newest-cni-774287
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-774287 -n newest-cni-774287: exit status 7 (65.679816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-774287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-774287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 18:35:14.631996   16753 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/custom-flannel-791304/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-774287 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (34.441979161s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-774287 -n newest-cni-774287
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.93s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-774287 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-774287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-774287 -n newest-cni-774287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-774287 -n newest-cni-774287: exit status 2 (230.320555ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-774287 -n newest-cni-774287
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-774287 -n newest-cni-774287: exit status 2 (228.199634ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-774287 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-774287 -n newest-cni-774287
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-774287 -n newest-cni-774287
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)
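The pause checks above follow a fixed pattern: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (both via exit status 2, which the test treats as acceptable), then unpause and re-check both. A minimal shell sketch of that pattern, using the same profile name and format strings from the log:

    out/minikube-linux-amd64 pause -p newest-cni-774287 --alsologtostderr -v=1
    # While paused, these status checks exit with status 2 and print "Paused" / "Stopped"
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-774287 -n newest-cni-774287 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-774287 -n newest-cni-774287 || true
    out/minikube-linux-amd64 unpause -p newest-cni-774287 --alsologtostderr -v=1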

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 2.97
265 TestNetworkPlugins/group/cilium 2.9
271 TestStartStop/group/disable-driver-mounts 0.14
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-791304 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-791304" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 17:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.247:8443
  name: running-upgrade-339463
contexts:
- context:
    cluster: running-upgrade-339463
    user: running-upgrade-339463
  name: running-upgrade-339463
current-context: running-upgrade-339463
kind: Config
preferences: {}
users:
- name: running-upgrade-339463
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/running-upgrade-339463/client.crt
    client-key: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/running-upgrade-339463/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-791304

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-791304"

                                                
                                                
----------------------- debugLogs end: kubenet-791304 [took: 2.830660128s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-791304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-791304
--- SKIP: TestNetworkPlugins/group/kubenet (2.97s)
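The repeated "context was not found" / "does not exist" failures in the debug log above are expected: the kubenet-791304 profile is never started (the test is skipped before minikube start runs), so no kubectl context is created for it, and the only context in the kubeconfig shown above is running-upgrade-339463. A short sketch of how one could confirm that, assuming the same kubeconfig:

    kubectl config get-contexts        # lists contexts; only running-upgrade-339463 is defined
    kubectl config current-context     # prints running-upgrade-339463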

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-791304 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-791304

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-791304" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: ip a s:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: ip r s:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: iptables-save:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: iptables table nat:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-791304

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-791304

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-791304" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-791304" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-791304

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-791304

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-791304" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-791304" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-791304" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-791304" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-791304" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: kubelet daemon config:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> k8s: kubelet logs:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-9545/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 17:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.247:8443
  name: running-upgrade-339463
contexts:
- context:
    cluster: running-upgrade-339463
    user: running-upgrade-339463
  name: running-upgrade-339463
current-context: running-upgrade-339463
kind: Config
preferences: {}
users:
- name: running-upgrade-339463
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/running-upgrade-339463/client.crt
    client-key: /home/jenkins/minikube-integration/19461-9545/.minikube/profiles/running-upgrade-339463/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-791304

>>> host: docker daemon status:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: docker daemon config:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: docker system info:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: cri-docker daemon status:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: cri-docker daemon config:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: cri-dockerd version:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: containerd daemon status:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: containerd daemon config:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: containerd config dump:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: crio daemon status:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: crio daemon config:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: /etc/crio:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

>>> host: crio config:
* Profile "cilium-791304" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-791304"

----------------------- debugLogs end: cilium-791304 [took: 2.769101112s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-791304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-791304
--- SKIP: TestNetworkPlugins/group/cilium (2.90s)

x
+
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-891032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-891032
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)